Query in TimesTen taking more time than query in Oracle database

Hi,
Can anyone please explain to me why a query in TimesTen takes more time
than the same query in the Oracle database?
I will describe in detail, step by step, my settings and what I have done.
1. This is the table I created in the Oracle database
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
CREATE TABLE student (
  id NUMBER(9) PRIMARY KEY,
  first_name VARCHAR2(10),
  last_name VARCHAR2(10)
);
2. THIS IS THE ANONYMOUS BLOCK I USED TO
POPULATE THE STUDENT TABLE (TOTAL 2599999 ROWS):
declare
  firstname varchar2(12);
  lastname  varchar2(12);
begin
  for cntr in 1..2599999 loop
    firstname := (cntr+8)||'f';
    lastname  := (cntr+2)||'l';
    if cntr like '%9999' then
      dbms_output.put_line(cntr);  -- progress output every 10000 rows
    end if;
    insert into student values (cntr, firstname, lastname);
  end loop;
  commit;  -- commit so the cache group load can see the rows
end;
/
3. MY DSN IS SET THE FOLLOWING WAY:
DATA STORE PATH- G:\dipesh3repo\db
LOG DIRECTORY- G:\dipesh3repo\log
PERM DATA SIZE-1000
TEMP DATA SIZE-1000
MY TIMESTEN VERSION-
C:\Documents and Settings\dipesh>ttversion
TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
Instance admin: dipesh
Instance home directory: G:\TimestTen\TT70_32
Daemon home directory: G:\TimestTen\TT70_32\srv\info
THEN I CONNECT TO THE TIMESTEN DATABASE
C:\Documents and Settings\dipesh> ttisql
command>connect "dsn=dipesh3;oraclepwd=tiger";
4. THEN I SET THE CACHE CREDENTIALS AND START THE CACHE AGENT
call ttCacheUidPwdSet('SCOTT','TIGER');
Command> CALL ttCacheStart();
5. THEN I CREATE THE READ-ONLY CACHE GROUP AND LOAD IT
create readonly cache group rc_student autorefresh
interval 5 seconds from student
(id int not null primary key, first_name varchar2(10), last_name varchar2(10));
load cache group rc_student commit every 100 rows;
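To confirm the cache group is in place and fully loaded before running queries (cachegroups is a ttIsql built-in command):
Command> cachegroups;
Command> select count(*) from student;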
6. NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY.
I SET THE TIMING:
command>TIMING 1;
Consider this query now:
Command> select * from student where first_name='2155666f';
< 2155658, 2155666f, 2155660l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
another query-
Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
2206: Table SCOTT.STUDENTS not found
Execution time (SQLPrepare) = 0.074964 seconds.
The command failed.
Command> SELECT * FROM STUDENT where first_name='2093434f';
< 2093426, 2093434f, 2093428l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
Command>
7. NOW I PERFORM SIMILAR QUERIES FROM SQL*PLUS:
SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
ID FIRST_NAME LAST_NAME
1498663 1498671f 1498665l
Elapsed: 00:00:00.15
Can anyone please explain to me why the query in TimesTen takes more time
than the query in the Oracle database?
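A note for anyone comparing these runs: first_name is not indexed in either database, so both selects scan all ~2.6 million rows. A minimal sketch of what to check and try on the TimesTen side (the index name is made up; explain output varies by release):
Command> explain select * from student where first_name='2155666f';
Command> create index student_fn_ix on student(first_name);
With an index on the predicate column, the lookup should come back in well under a millisecond.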

TimesTen
Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
Version: 7.0.4.0.0 64 bit
Schema:
create usermanaged cache group factCache from
MV_US_DATAMART (
ORDER_DATE               DATE,
IF_SYSTEM               VARCHAR2(32) NOT NULL,
GROUPING_ID                TT_BIGINT,
TIME_DIM_ID               TT_INTEGER NOT NULL,
BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
INSTR_DIM_ID               TT_INTEGER NOT NULL,
EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
NO_ORDERS               TT_BIGINT,
FILLED_QUANTITY          TT_BIGINT,
CNT_FILLED_QUANTITY          TT_BIGINT,
QUANTITY               TT_BIGINT,
CNT_QUANTITY               TT_BIGINT,
COMMISSION               BINARY_FLOAT,
CNT_COMMISSION               TT_BIGINT,
FILLS_NUMBER               TT_BIGINT,
CNT_FILLS_NUMBER          TT_BIGINT,
AGGRESSIVE_FILLS          TT_BIGINT,
CNT_AGGRESSIVE_FILLS          TT_BIGINT,
NOTIONAL               BINARY_FLOAT,
CNT_NOTIONAL               TT_BIGINT,
TOTAL_PRICE               BINARY_FLOAT,
CNT_TOTAL_PRICE          TT_BIGINT,
CANCELLED_ORDERS_COUNT          TT_BIGINT,
CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
ROUTED_ORDERS_NO          TT_BIGINT,
CNT_ROUTED_ORDERS_NO          TT_BIGINT,
ROUTED_LIQUIDITY_QTY          TT_BIGINT,
CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
REMOVED_LIQUIDITY_QTY          TT_BIGINT,
CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
ADDED_LIQUIDITY_QTY          TT_BIGINT,
CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
AGENT_CHARGES               BINARY_FLOAT,
CNT_AGENT_CHARGES          TT_BIGINT,
CLEARING_CHARGES          BINARY_FLOAT,
CNT_CLEARING_CHARGES          TT_BIGINT,
EXECUTION_CHARGES          BINARY_FLOAT,
CNT_EXECUTION_CHARGES          TT_BIGINT,
TRANSACTION_CHARGES          BINARY_FLOAT,
CNT_TRANSACTION_CHARGES     TT_BIGINT,
ORDER_MANAGEMENT          BINARY_FLOAT,
CNT_ORDER_MANAGEMENT          TT_BIGINT,
SETTLEMENT_CHARGES          BINARY_FLOAT,
CNT_SETTLEMENT_CHARGES          TT_BIGINT,
RECOVERED_AGENT          BINARY_FLOAT,
CNT_RECOVERED_AGENT          TT_BIGINT,
RECOVERED_CLEARING          BINARY_FLOAT,
CNT_RECOVERED_CLEARING          TT_BIGINT,
RECOVERED_EXECUTION          BINARY_FLOAT,
CNT_RECOVERED_EXECUTION     TT_BIGINT,
RECOVERED_TRANSACTION          BINARY_FLOAT,
CNT_RECOVERED_TRANSACTION     TT_BIGINT,
RECOVERED_ORD_MGT          BINARY_FLOAT,
CNT_RECOVERED_ORD_MGT          TT_BIGINT,
RECOVERED_SETTLEMENT          BINARY_FLOAT,
CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
CLIENT_AGENT               BINARY_FLOAT,
CNT_CLIENT_AGENT          TT_BIGINT,
CLIENT_ORDER_MGT          BINARY_FLOAT,
CNT_CLIENT_ORDER_MGT          TT_BIGINT,
CLIENT_EXEC               BINARY_FLOAT,
CNT_CLIENT_EXEC          TT_BIGINT,
CLIENT_TRANS               BINARY_FLOAT,
CNT_CLIENT_TRANS          TT_BIGINT,
CLIENT_CLEARING          BINARY_FLOAT,
CNT_CLIENT_CLEARING          TT_BIGINT,
CLIENT_SETTLE               BINARY_FLOAT,
CNT_CLIENT_SETTLE          TT_BIGINT,
CHARGEABLE_TAXES          BINARY_FLOAT,
CNT_CHARGEABLE_TAXES          TT_BIGINT,
VENDOR_CHARGE               BINARY_FLOAT,
CNT_VENDOR_CHARGE          TT_BIGINT,
ROUTING_CHARGES          BINARY_FLOAT,
CNT_ROUTING_CHARGES          TT_BIGINT,
RECOVERED_ROUTING          BINARY_FLOAT,
CNT_RECOVERED_ROUTING          TT_BIGINT,
CLIENT_ROUTING               BINARY_FLOAT,
CNT_CLIENT_ROUTING          TT_BIGINT,
TICKET_CHARGES               BINARY_FLOAT,
CNT_TICKET_CHARGES          TT_BIGINT,
RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
READONLY);
No of rows: 2228558
Config:
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
< ConnectionCharacterSet, US7ASCII >
< ConnectionName, tt_us_dma >
< Connections, 64 >
< DataBaseCharacterSet, AL32UTF8 >
< DataStore, e:\andrew\datacache\usDMA >
< DurableCommits, 0 >
< GroupRestrict, <NULL> >
< LockLevel, 0 >
< LockWait, 10 >
< LogBuffSize, 65536 >
< LogDir, e:\andrew\datacache\ >
< LogFileSize, 64 >
< LogFlushMethod, 1 >
< LogPurge, 0 >
< Logging, 1 >
< MemoryLock, 0 >
< NLS_LENGTH_SEMANTICS, BYTE >
< NLS_NCHAR_CONV_EXCP, 0 >
< NLS_SORT, BINARY >
< OracleID, NYCATP1 >
< PassThrough, 0 >
< PermSize, 4000 >
< PermWarnThreshold, 90 >
< PrivateCommands, 0 >
< Preallocate, 0 >
< QueryThreshold, 0 >
< RACCallback, 0 >
< SQLQueryTimeout, 0 >
< TempSize, 514 >
< TempWarnThreshold, 90 >
< Temporary, 1 >
< TransparentLoad, 0 >
< TypeMode, 0 >
< UID, OS_OWNER >
ORACLE:
Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
Schema:
CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
TABLESPACE TS_OS
PARTITION BY RANGE (ORDER_DATE)
(
PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
LOGGING
NOCOMPRESS
TABLESPACE TS_OS
)
NOCACHE
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
USING INDEX
TABLESPACE TS_OS_INDEX
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT order_date, if_system,
GROUPING_ID (order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id
) GROUPING_ID,
/* ============ DIMENSIONS ============ */
time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
instr_dim_id, execution_dim_id, exec_exchange_dim_id,
/* ============ MEASURES ============ */
-- o.FX_RATE /* FX_RATE */,
COUNT (*) no_orders,
-- SUM(NO_ORDERS) NO_ORDERS,
-- COUNT(NO_ORDERS) CNT_NO_ORDERS,
SUM (filled_quantity) filled_quantity,
COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
COUNT (quantity) cnt_quantity, SUM (commission) commission,
COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
COUNT (fills_number) cnt_fills_number,
SUM (aggressive_fills) aggressive_fills,
COUNT (aggressive_fills) cnt_aggressive_fills,
SUM (fx_rate * filled_quantity * average_price) notional,
COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
SUM (fx_rate * fills_number * average_price) total_price,
COUNT (fx_rate * fills_number * average_price) cnt_total_price,
SUM (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END) cancelled_orders_count,
COUNT (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END
) cnt_cancelled_orders_count,
-- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
-- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
-- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
SUM (routed_orders_no) routed_orders_no,
COUNT (routed_orders_no) cnt_routed_orders_no,
SUM (routed_liquidity_qty) routed_liquidity_qty,
COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
SUM (removed_liquidity_qty) removed_liquidity_qty,
COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
SUM (added_liquidity_qty) added_liquidity_qty,
COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
SUM (agent_charges) agent_charges,
COUNT (agent_charges) cnt_agent_charges,
SUM (clearing_charges) clearing_charges,
COUNT (clearing_charges) cnt_clearing_charges,
SUM (execution_charges) execution_charges,
COUNT (execution_charges) cnt_execution_charges,
SUM (transaction_charges) transaction_charges,
COUNT (transaction_charges) cnt_transaction_charges,
SUM (order_management) order_management,
COUNT (order_management) cnt_order_management,
SUM (settlement_charges) settlement_charges,
COUNT (settlement_charges) cnt_settlement_charges,
SUM (recovered_agent) recovered_agent,
COUNT (recovered_agent) cnt_recovered_agent,
SUM (recovered_clearing) recovered_clearing,
COUNT (recovered_clearing) cnt_recovered_clearing,
SUM (recovered_execution) recovered_execution,
COUNT (recovered_execution) cnt_recovered_execution,
SUM (recovered_transaction) recovered_transaction,
COUNT (recovered_transaction) cnt_recovered_transaction,
SUM (recovered_ord_mgt) recovered_ord_mgt,
COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
SUM (recovered_settlement) recovered_settlement,
COUNT (recovered_settlement) cnt_recovered_settlement,
SUM (client_agent) client_agent,
COUNT (client_agent) cnt_client_agent,
SUM (client_order_mgt) client_order_mgt,
COUNT (client_order_mgt) cnt_client_order_mgt,
SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
SUM (client_trans) client_trans,
COUNT (client_trans) cnt_client_trans,
SUM (client_clearing) client_clearing,
COUNT (client_clearing) cnt_client_clearing,
SUM (client_settle) client_settle,
COUNT (client_settle) cnt_client_settle,
SUM (chargeable_taxes) chargeable_taxes,
COUNT (chargeable_taxes) cnt_chargeable_taxes,
SUM (vendor_charge) vendor_charge,
COUNT (vendor_charge) cnt_vendor_charge,
SUM (routing_charges) routing_charges,
COUNT (routing_charges) cnt_routing_charges,
SUM (recovered_routing) recovered_routing,
COUNT (recovered_routing) cnt_recovered_routing,
SUM (client_routing) client_routing,
COUNT (client_routing) cnt_client_routing,
SUM (ticket_charges) ticket_charges,
COUNT (ticket_charges) cnt_ticket_charges,
SUM (recovered_ticket_charges) recovered_ticket_charges,
COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
FROM us_datamart_raw
GROUP BY order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id;
-- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
-- by Oracle with the associated materialized view.
CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
NOLOGGING
NOPARALLEL
COMPRESS 7;
No of rows: 2228558
The query (taken from Mondrian) I run against each of them is:
select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
--, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
--, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
--, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
--, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
--, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
--, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
--, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
--, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
--, sum("MV_US_DATAMART"."COMMISSION") as "m9"
--, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
--, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
--,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
--,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
--, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
--, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
--, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
--, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
--,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
--, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
          from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
where I uncomment a column at a time and rerun. I improved the TimesTen results since my first post, by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
No. of columns    ORACLE    TimesTen
 1                1.05      0.94
 2                1.07      1.47
 3                2.04      1.80
 4                2.06      2.08
 5                2.09      2.40
 6                3.01      2.67
 7                4.02      3.06
 8                4.03      3.37
 9                4.04      3.62
10                4.06      4.02
11                4.08      4.31
12                4.09      4.61
13                5.01      4.76
14                5.02      5.06
15                5.04      5.25
16                5.05      5.48
17                5.08      5.84
18                6.00      6.21
19                6.02      6.34
20                6.04      6.75

Similar Messages

  • Level1 backup is taking more time than Level0

The Level 1 backup is taking more time than Level 0, and I am really frustrated as to how this could happen. I have a 6.5 GB database. Level 0 took 8 hrs, but Level 1 is taking more than 8 hrs. Please help me in this regard.

    Ogan Ozdogan wrote:
    Charles,
By enabling block change tracking it will indeed be faster than what he currently has. But I think this does not address the question of the OP, unless you are saying that an incremental backup without block change tracking is slower than a level 0 (full) backup?
Thank you in anticipation.
Ogan

Ogan,
    I can't explain why a 6.5GB level 0 RMAN backup would require 8 hours to complete (maybe a very slow destination device connected by 10Mb/s Ethernet) - I would expect that it should complete in a couple of minutes.
    An incremental level 1 backup without a block change tracking file could take longer than a level 0 backup. I encountered a good written description of why that could happen, but I can't seem to locate the source at the moment. The longer run time might have been related to the additional code paths required to constantly compare the SCN of each block, and the variable write rate which may affect some devices, such as a tape device.
    A paraphrase from the book "Oracle Database 10g RMAN Backup & Recovery"
    "Incremental backups must check the header of each block to discover if it has changed since the last incremental backup - that means an incremental backup may not complete much faster than a full backup."
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Update query in sql taking more time

    Hi
I am running an update query which is taking more time; any help to make it run faster?
update arm538e_tmp t
set t.qtr5 = (select (sum(nvl(m.net_sales_value,0))/1000)
              from mnthly_sales_actvty m
              where m.vndr# = t.vndr#
              and m.cust_type_cd = t.cust_type
              and m.cust_type_cd <> 13
              and m.yymm between 201301 and 201303
              group by m.vndr#, m.cust_type_cd);
Help will be appreciated.
Thank you

    This is the Reports forum. Ask this in the SQL and PL/SQL forum.

  • CatSearch taking more time than full table scan

    Hi
I have a table which has close to 140 million records. I have been exploring the option of using Oracle Text for search, so I created a ctxcat index on the Name column with the following:
    begin
         ctx_ddl.create_preference('FT_WL', 'BASIC_WORDLIST');
         ctx_ddl.set_attribute ('FT_WL', 'prefix-index','TRUE');
    end;
    create index history_namex on history(name) indextype is ctxsys.ctxcat parameters ('WORDLIST FT_WL');
But when I executed the following queries, I found that catsearch is taking more time. The queries and their stats are:
    1. select * from history where catsearch(name, 'Jake%', null) > 0 and rownum < 200;
    Elapsed : 00 : 00 : 00.13
    Statistics :
    112 recursive calls
    0 db block gets
    413 consistent gets
    28 physical reads
    0 redo size
    33168 bytes sent via SQL*Net to client
663 bytes received via SQL*Net from client
    15 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    199 rows processed
    2. select * from history where name like 'Jake%' and rownum < 200;
    Elapsed : 00 : 00 : 00.05
    Statistics :
    1 recursive calls
    0 db block gets
    220 consistent gets
    383 physical reads
    0 redo size
    26148 bytes sent via SQL*Net to client
663 bytes received via SQL*Net from client
    15 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    199 rows processed
    Can anyone explain why this is happening?
    PS : there is no conventional index on the name column.

    The asterisk (*) is simply the correct syntax for a wildcard using catsearch. If you use % instead, then you will not get the same results. Please see the section of the online documentation below, that shows that the asterisk is the wildcard for catsearch.
    http://docs.oracle.com/cd/E11882_01/text.112/e24436/csql.htm#CHDJBGHE
    Additionally, if you want to limit the rows, then you need to get the matching results in an inner sub-query, then use the rownum in an outer query. The way that you were doing it, it first limits the rows to the first 200, then checks which of those meet the criteria, instead of the other way around. So, the correct syntax should be the following, which should also be the most efficient.
    select * from  
           (select * from history
            where  catsearch (name, 'Jake*', null) > 0)
    where  rownum < 200;

  • Suddenly ODI scheduled executions taking more time than usual.

    Hi,
    I have set ODI packages scheduled for execution.
For some days now they have been taking more time to execute.
Before, they used to take approx. 1 hr 30 mins.
Now they are taking approx. 3 hrs - 3 hrs 15 mins.
And there is no major change in the data in terms of quantity.
My ODI version is
    Standalone Edition Version 11.1.1
    Build ODI_11.1.1.3.0_GENERIC_100623.1635
    ODI packages are mainly using Oracle as SOURCE and TARGET DB.
What should I check to find the reasons for this sudden increase in execution time?
    Any pointers regarding this would be appreciated.
    Thanks,
    Mahesh

    Mahesh,
    Use some repository queries to retrieve the session task timings and compare your slow execution to a previous acceptable execution, then look for the biggest changes - this will highlight where you are slowing down, then its off to tune the item accordingly.
See here for some example reports; you might need to tweak them for your current repository version, but I don't think the table structures have changed that much:
    http://rnm1978.wordpress.com/2010/11/03/analysing-odi-batch-performance/
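As a sketch of such a query (assuming the standard 11g work-repository table SNP_SESSION; the package name is hypothetical, so verify names against your repository version):
SELECT sess_no, sess_name, sess_beg, sess_end, sess_dur
FROM   snp_session
WHERE  sess_name = 'MY_PACKAGE'
ORDER  BY sess_beg DESC;
Once you have a slow and a fast session number, drill into the per-task rows as the linked article shows and compare durations task by task.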

  • Cube content deletion is taking more time than usual.

    Hi Experts,
We have a process chain which ideally should run every two hours. This chain has a delete-data-cube-content step before the new data is loaded into the cube. The chain runs fine in one instance, while another instance takes more time, so the problem is quite intermittent.
In the process chain we are also deleting contents from the dimension tables (in the delete content step). We need your inputs to improve the performance of this step.
    Thanks & Regards
    Mayank Tyagi.

    Hi Mayank ,
You can delete the indexes of the cube before deleting its contents. The concept is the same as in data loading: data loads happen faster when indexes are deleted.
If you have aggregates over this cube, those aggregates will also be adjusted.
    Kind Regards,
    Ashutosh Singh

Inserting into TimesTen IMDB Cache is taking more time than insert in db11g

    Hi,
I'm very new to TimesTen IMDB Cache. I just recently installed IMDB Cache on an app server, then installed Oracle 11g on a db server and integrated them.
I wanted to test the performance of IMDB Cache, so I created an insert script that writes about 10K records. I ran it against IMDB Cache and then ran it directly against Oracle 11g. The results show that TimesTen is slower than the Oracle db.
I followed the installation steps on the Oracle website. My question: what else should I consider to make IMDB Cache 10x faster than writing to the Oracle db?
    Hope you could help.
    John

Also check the LOG_BUFFER_WAITS value in the sys.monitor table (in ttisql, type command>monitor;). If that value increases after you run your application, then you need to increase the LogBufMB (log buffer size in MB) parameter from the default to something higher.
Another thing you can do, obviously, is commit every x rows (something like 300 is typical), as opposed to committing at the end of a large batch job.
    But the biggest impact of them all is if you run multiple jobs (multi-thread your application). And have as many direct-connected threads/processes processing the workload in parallel.
    We recommend as many direct connections/threads/processes as there are CPU cores on the machine minus one. So on a machine with 8 cores:
    Set LogBufParallelism to 7 in the ODBCINI file, then launch 7 threads/processes in parallel in your application, and you should see a major increase in throughput (transactions accomplished per second).
    Another big impacting item: separate the LogDir directory filesystem from the DataStore checkpoint files filesystem (LogDir directory can be defined in the ODBCINI file as well).
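Pulling those suggestions together, the DSN entry in sys.odbc.ini might look something like this (a sketch for an 8-core box; the DSN name and paths are illustrative):
[my_dsn]
# checkpoint files on one filesystem
DataStore=/ttdata/ds/my_ds
# transaction logs on a separate filesystem
LogDir=/ttlogs/my_ds
LogBufMB=1024
# cores minus one, per the advice above
LogBufParallelism=7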
    Regards,
    -H

  • Row Insert in Timesten takes more time than Oracle

    Hi,
    I have a Timesten IMDB (11.2.1.8.0 (64 bit Linux/x86_64) with an underlying Oracle Database 11Gr2.
    Sys.odbc.ini entry is :
    [DSN_NAME]
    Driver=/application/TimesTen/matrix/lib/libtten.so
    DataStore=/application/TimesTen/DSN_NAME_datastore/DSN_NAME_DS_DIR
    LogDir=/logs_timeten/DSN_NAME_logdir
    PermSize=8000
    TempSize=250
    PLSQL=1
    DatabaseCharacterSet=WE8MSWIN1252
    OracleNetServiceName=DBNAME
    Connections=500
    PassThrough=0
    SQLQueryTimeout=250
    LogBufMB=512
    LogFileSize=512
    LogPurge=1
When I try to insert a simple row into a table in an async cache group in TimesTen it takes 3 ms (the table has 6 indexes on it). On removing 4 indexes the performance improves to 1 ms. However, inserting the same row in Oracle (with 6 indexes) takes 1.2 ms.
    How can we improve the insert row performance in Timesten ? Kindly assist.
    Regards,
    Karan
    PS: During the test run, we monitored deadlocks and log buffer waits with the following query and both values never changed from zero
    select PERM_ALLOCATED_SIZE,PERM_IN_USE_SIZE,TEMP_ALLOCATED_SIZE,TEMP_IN_USE_SIZE,DEADLOCKS,LOG_FS_READS,LOG_FS_WRITES,LOG_BUFFER_WAITS from sys.monitor;

This is not very efficient, as the statement will likely need to be parsed for each INSERT. Even a soft parse is very expensive compared to the cost of the actual INSERT.
Can you try changing your code to something like the following, just to evaluate the difference in performance? The objective is to prepare the INSERT just once, outside of the INSERT loop, and then execute the prepared INSERT many times, passing the required input parameters. I'm not a Pro*C expert, but an outline of the code looks something like this:
    char * ins1 = "               INSERT INTO ORDERS(
                        ORD_ORDER_NO             ,
                        ORD_SERIAL_NO            ,
                        ORD_SEM_SMST_SECURITY_ID,
                        ORD_BTM_EMM_MKT_TYPE    ,
                        ORD_BTM_BOOK_TYPE        ,
                        ORD_EXCH_ID              ,
                        ORD_EPM_EM_ENTITY_ID     ,
                        ORD_EXCH_ORDER_NO        ,
                        ORD_CLIENT_ID            ,
                        ORD_BUY_SELL_IND         ,
                        ORD_TRANS_CODE           ,
                        ORD_STATUS               ,
                        ORD_ENTRY_DATE           ,
                        ORD_ORDER_TIME           ,     
                        ORD_QTY_ORIGINAL         ,
                        ORD_QTY_REMAINING        ,
                        ORD_QTY_DISC             ,
                        ORD_QTY_DISC_REMAINING   ,
                        ORD_QTY_FILLED_TODAY     ,
                        ORD_ORDER_PRICE          ,
                        ORD_TRIGGER_PRICE        ,  
                        ORD_DISC_QTY_FLG         ,
                        ORD_GTC_FLG              ,
                        ORD_DAY_FLG              ,
                        ORD_IOC_FLG             ,
                        ORD_MIN_FILL_FLG        ,
                        ORD_MKT_FLG             ,
                        ORD_STOP_LOSS_FLG       ,
                        ORD_AON_FLG             ,
                        ORD_GOOD_TILL_DAYS      ,
                        ORD_GOOD_TILL_DATE     ,     
                        ORD_AUCTION_NO          ,
                        ORD_ACC_CODE            ,
                        ORD_UM_USER_ID          ,
                        ORD_MIN_FILL_QTY        ,
                        ORD_SETTLEMENT_DAYS     ,
                        ORD_COMPETITOR_PERIOD   ,
                        ORD_SOLICITOR_PERIOD    ,
                        ORD_PRO_CLIENT          ,
                        ORD_PARTICIPANT_TYPE    ,
                        ORD_PARTICIPANT_CODE    ,
                        ORD_COUNTER_BROKER_CODE ,
                        ORD_CUSTODIAN_CODE      ,
                        ORD_SETTLER             ,
                        ORD_REMARKS             ,
                        ORD_BSE_DELV_FLAG       ,
                        ORD_BSE_NOTICE_NUM      ,
                        ORD_ERROR_CODE          ,
                        ORD_EXT_CLIENT_ID       ,
                        ORD_SOURCE_FLG          ,
                        ORD_BUY_BACK_FLG        ,
                        ORD_RESERVE_FLG         ,
                        ORD_BSE_REMARK          ,
                        ORD_CARRY_FORWARD_FLAG  ,
                        ORD_ORDER_OFFON         ,
                        ORD_D2C1_FLAG           ,
                        ORD_FI_RETAIL_FLG       ,
                        ORD_OIB_INT_REF_ID      ,
                        ORD_BOB_BASKET_ORD_NO   ,
                        ORD_PRODUCT_ID          ,
                        ORD_OIB_EXEC_REPORT_ID   ,
                        ORD_BANK_DP_TXN_ID       ,
                        ORD_USERINFO_PROG        ,
                        ORD_BANK_CODE            ,
                        ORD_BANK_ACC_NUM         ,
                        ORD_DP_CODE              ,
                        ORD_DP_ACC_NUM           ,
                        ORD_SESSION_ORDER_TYPE   ,
                        ORD_ORDER_CC_SEQ         ,
                        ORD_RMS_DAEMON_STATUS    ,
                        ORD_GROUP_ID             ,
                        ORD_REASON_CODE          ,
                        ORD_REASON_DESCRIPTION   ,
                        ORD_SERIES_IND           ,
                        ORD_BOB_BASKET_TYPE  ,
                        ORD_ORIGINAL_TIME    ,
                        ORD_TRD_EXCH_TRADE_NO,     
                        ORD_MKT_PROT   ,
                        ORD_SETTLEMENT_TYPE      ,
                        ORD_SUB_CLIENT,
                             ORD_ALGO_OI_NUM,
                             ORD_FROM_ALGO_CLORDID,
                         ORD_FROM_ALGO_ORG_CLORDID
                    ) VALUES(
                        :lvar_ord_order_no       ,
                        :lvar_ord_serial_no     ,
                        ltrim(rtrim(:lvar_ord_sem_smst_security_id)),
                        ltrim(rtrim(:lvar_ord_btm_emm_mkt_type)),
                        ltrim(rtrim(:lvar_ord_btm_book_type)),
                        ltrim(rtrim(:lvar_ord_exch_id))  ,
                        decode(:lD2C1Flag,'N',ltrim(rtrim(:lvar_ord_epm_em_entity_id)),ltrim(rtrim(:sD2C1ControllerId)))  ,
                                   :insertExchOrderNo,
                        ltrim(rtrim(:lvar_ord_client_id))  ,
                        ltrim(rtrim(:lvar_ord_buy_sell_ind)),
                        :lvar_ord_trans_code,
                        :cTransitStatus      ,
                        sysdate,
                        sysdate,
                        :lvar_ord_qty_original,
                        decode(:lvar_ord_qty_remaining  ,-1,to_number(null),:lvar_ord_qty_remaining)    ,
                        decode(:lvar_ord_qty_disc       ,-1,to_number(null),:lvar_ord_qty_disc),
                        decode(:lvar_ord_qty_disc_remaining,-1,to_number(null),:lvar_ord_qty_disc_remaining),
                        :lvar_ord_qty_filled_today    ,
                        :lvar_ord_order_price,
                        decode(:lvar_ord_trigger_price  ,-1,to_number(null),:lvar_ord_trigger_price)     ,
                                   decode(:lvar_ord_disc_qty_flg ,-1,null,:lvar_ord_disc_qty_flg)  ,
                                   decode(:lvar_ord_gtc_flg ,-1,null,:lvar_ord_gtc_flg)  ,
                                   decode(:lvar_ord_day_flg ,-1,null,:lvar_ord_day_flg)  ,
                                   decode(:lvar_ord_ioc_flg ,-1,null,:lvar_ord_ioc_flg)  ,
                                   decode(:lvar_ord_min_fill_flg ,-1,null,:lvar_ord_min_fill_flg)  ,
                                   decode(:lvar_ord_mkt_flg ,-1,null,:lvar_ord_mkt_flg)  ,
                                   decode(:lvar_ord_stop_loss_flg ,-1,null,:lvar_ord_stop_loss_flg)  ,
                                   decode(:lvar_ord_aon_flg ,-1,null,:lvar_ord_aon_flg)  ,
                        decode(:lvar_ord_good_till_days ,-1,to_number(null),:lvar_ord_good_till_days),
                        to_date(ltrim(rtrim(:lvar_ord_good_till_date))  ,'dd-mm-yyyy'),
                        :lvar_ord_auction_no,
                        ltrim(rtrim(:lvar_ord_acc_code)),
                        ltrim(rtrim(:lv_UserIdOrLogPktId)),
                        decode(:lvar_ord_min_fill_qty,-1,to_number(null),:lvar_ord_min_fill_qty),
                        :lvar_ord_settlement_days,
                        :lvar_ord_competitor_period,
                        :lvar_ord_solicitor_period,
                        :lvar_ord_pro_client         ,
                        ltrim(rtrim(:lvar_ord_participant_type)),
                        ltrim(rtrim(:lvar_ord_participant_code)),
                        ltrim(rtrim(:lvar_ord_counter_broker_code)),
                        trim(:lvar_ord_custodian_code)     ,
                        ltrim(rtrim(:lvar_ord_settler)),
                        ltrim(rtrim(:lvar_ord_remarks)),
                        ltrim(rtrim(:lvar_ord_bse_delv_flag))      ,
                        ltrim(rtrim(:lvar_ord_bse_notice_num))     ,
                        :lvar_ord_error_code         ,
                        trim(:lvar_ord_ext_client_id)      ,
                        ltrim(rtrim(:lvar_ord_source_flg)),
                        ltrim(rtrim(:lvar_ord_buyback_flg)),
                        :lvar_ord_reserve_flag        ,
                        trim(:lvar_ord_bse_remark)         ,
                        ltrim(rtrim(:lvar_ord_carryfwd_flg)),
                        :cOnStatus,
                        :lD2C1Flag,
                        :lSendToRemoteUser,
                        :lInternalRefId,
                        :lvar_bob_basket_ord_no,
                        ltrim(rtrim(:lvar_ord_product_id)),
                        trim(:lvar_ord_oib_exec_report_id)   ,
                        :lvar_BankDpTxnId  ,
                        ltrim(rtrim(:lEquBseUserCode )),
                        ltrim(rtrim(:lvar_BankCode))  ,
                        ltrim(rtrim(:lvar_BankAccNo)),
                        ltrim(rtrim(:lvar_DPCode)),
                        ltrim(rtrim(:lvar_DPAccNo))  ,
                        ltrim(rtrim(:lvar_OrderSessionType))   ,
                        :lvar_ord_order_cc_seq,
                        :lvar_ord_rms_daemon_status    ,
                        :lvarGrpId,
                        :lvar_ord_reason_code          ,
                        trim(:lvar_ord_reason_description)   ,
                        :lSecSeriesInd,
                        ltrim(rtrim(:lBasketType)),
                        sysdate,
                        (-1 * :lvar_ord_serial_no),
                        :MktProt ,          
                        :lvar_ord_sett_type,
                        ltrim(rtrim(:lvar_ca_cli_type)) ,
                                   :ComplianceID,
                                   ltrim(rtrim(:lvar_ClOrd)),
                         ltrim(rtrim(:lvar_OrgClOrd))
                    )";
    EXEC SQL AT :db_conn PREPARE i1 FROM :ins1;
         logTimestamp("BEFORE inserting in  orders table");
for (i=0; i<NUM_INSERTS; i++) {
              if ( strncmp(lvar_ord_exch_id.arr,"NSE",3) ==0 )
                   if(tmpExchOrderNo == -1)
                        insertExchOrderNo = NULL;
                   else
                        insertExchOrderNo = tmpExchOrderNo;
              else if ( strncmp(lvar_ord_exch_id.arr,"BSE",3) ==0 )
                   if(tmpExchOrderNo == -1)
                        insertExchOrderNo = NULL;
                   else
                        insertExchOrderNo = tmpExchOrderNo;
                lvar_ord_acc_code.len = strlen (lvar_ord_acc_code.arr);
              sprintf (lv_UserIdOrLogPktId.arr,"%d",UserIdOrLogPktId );
              lv_UserIdOrLogPktId.len = strlen (lv_UserIdOrLogPktId.arr) ;
              lEquBseUserCode.len = fTrim(lEquBseUserCode.arr,16);
              lvar_ord_buyback_flg.len = fTrim(lvar_ord_buyback_flg.arr,1);
              lvar_ord_exch_id.len = fTrim(lvar_ord_exch_id.arr,3);
                   EXEC SQL AT :db_conn EXECUTE i1 USING
                        :lvar_ord_order_no       ,
                        :lvar_ord_serial_no     ,
                        :lvar_ord_sem_smst_security_id,
                        :lvar_ord_btm_emm_mkt_type,
     etc. ;
     }
          logTimestamp("AFTER inserting in  orders table");
    /* Divide reported time by NUM_INSERTS to get average time for one insert */
    Chris

  • RMAN backup taking more time than usual suddenly

    Hi All,
We are using an 11.1.0.7 database. We regularly take a full level 0 incremental backup, which generally took 4:30 hours to complete, but for the last 2-3 days it has been taking 6 hours or more. We did not make any parameter or script changes in the database.
    Below are the details of rman :
    RMAN> show all;
    RMAN configuration parameters for database with db_unique_name OLAP are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'f:\backup
    CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
    CONFIGURE CHANNEL DEVICE TYPE DISK MAXOPENFILES 2;
    CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT 'e:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT 'f:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 3 DEVICE TYPE DISK FORMAT 'e:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 4 DEVICE TYPE DISK FORMAT 'f:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 5 DEVICE TYPE DISK FORMAT 'e:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 6 DEVICE TYPE DISK FORMAT 'f:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 7 DEVICE TYPE DISK FORMAT 'e:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 8 DEVICE TYPE DISK FORMAT 'f:\backup\OLAP\OLAP_full_%u
    CONFIGURE MAXSETSIZE TO UNLIMITED;
    CONFIGURE ENCRYPTION FOR DATABASE OFF;
    CONFIGURE ENCRYPTION ALGORITHM 'AES128';
    CONFIGURE COMPRESSION ALGORITHM 'BZIP2';
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'f:\backup\OLAP\SNCFOLAP.ORA';
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'F:\BACKUP\OLAP\SNCFOLAP.ORA';
    =====================================================
Please help me, as this extra time makes my scheduled task overrun.
    Thanks
    Sam

    sam wrote:
    Hi All,
We are using an 11.1.0.7 database. We regularly take a full level 0 incremental backup, which generally took 4:30 hours to complete, but for the last 2-3 days it has been taking 6 hours or more. We did not make any parameter or script changes in the database.

This could be due to a change in server load;
please compare the server load (cpu/memory) over the above two periods.
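One quick way to quantify the slowdown alongside the server-load check (v$rman_backup_job_details is a standard view in 11.1):
SELECT start_time, end_time, elapsed_seconds,
       input_bytes/1024/1024/1024 AS input_gb,
       output_bytes/1024/1024/1024 AS output_gb
FROM   v$rman_backup_job_details
ORDER  BY start_time DESC;
If the input volume is flat while elapsed time grew, look at host load and I/O; if the input volume grew as well, the backup is simply reading more data.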

  • Wait Step is taking more time than specified

    Hi All,
I have included a wait step of 1 minute in BPM, but it is taking more than 30 minutes instead of 1 minute.
    Could any one help me on this?
    Help will be rewarded.
    Thanks & Regards,
    Jyothi.

    Hi,
Check whether all the jobs are scheduled correctly in SWF_XI_CUSTOMIZING. If you find any jobs in error status, just perform an Automatic BPM Customizing.
    refer my wiki... [BPM Trouble Shooting Deadline Step|https://wiki.sdn.sap.com/wiki/display/XI/BPMTroubleShootingDeadlineStep]
    ~SaNv...

iMac taking more time than usual to boot and Bluetooth functionality is not available

    My IMac (21" 10.9.1 OS X 2.5GHZ Intel Core i5) is taking much time to boot. I am seeing this happening from past one week. I am also facing Bluetooth issue. My bluetooth Keyboard and Mouse are not working. So I had to connect with normal USB Keyboard and Mouse and when I logged in, I am able to see that the bluetooth functionality is disabled. It is saying Bluetooth not available.
I tried Repair Disk Permissions (in Disk Utility) and also tried a PRAM reset. Neither worked.
Has anyone faced the same issue? My questions:
1) Can anyone help me resolve this issue?
2) What caused this issue all of a sudden? I don't remember installing any software recently.
    Thanks in advance.

    Hi,
Below are the details that I have taken from EtreCheck. Can you please have a look at this?
    Hardware Information:
              iMac (21.5-inch, Mid 2011)
              iMac - model: iMac12,1
              1 2.5 GHz Intel Core i5 CPU: 4 cores
              4 GB RAM
    Video Information:
              AMD Radeon HD 6750M - VRAM: 512 MB
    System Software:
              OS X 10.9.1 (13B42) - Uptime: 0 days 0:6:52
    Disk Information:
              ST3500418AS disk0 : (500.11 GB)
                        EFI (disk0s1) <not mounted>: 209.7 MB
                        Mcintosh (disk0s2) /: 499.25 GB (61.48 GB free)
                        Recovery HD (disk0s3) <not mounted>: 650 MB
              OPTIARC DVD RW AD-5690H 
    USB Information:
              Apple Inc. FaceTime HD Camera (Built-in)
              USB USB Keyboard
              DragonRise Inc.   Generic   USB  Joystick  
              Logitech USB Optical Mouse
              Apple Computer, Inc. IR Receiver
              Apple Internal Memory Card Reader
    FireWire Information:
    Thunderbolt Information:
              Apple Inc. thunderbolt_bus
    Kernel Extensions:
              com.orderedbytes.driver.CMUSBDevices          (4.6.0 - SDK 10.6)
              com.orderedbytes.driver.ControllerMateFamily          (4.6.0 - SDK 10.6)
    Startup Items:
              HWNetMgr: Path: /Library/StartupItems/HWNetMgr
              HWPortDetect: Path: /Library/StartupItems/HWPortDetect
    Problem System Launch Daemons:
    Problem System Launch Agents:
    Launch Daemons:
              [System] com.adobe.fpsaud.plist 3rd-Party support link
              [System] com.google.keystone.daemon.plist 3rd-Party support link
              [System] com.microsoft.office.licensing.helper.plist 3rd-Party support link
              [System] com.oracle.java.Helper-Tool.plist 3rd-Party support link
              [System] com.oracle.java.JavaUpdateHelper.plist 3rd-Party support link
    Launch Agents:
              [System] com.google.keystone.agent.plist 3rd-Party support link
              [System] com.oracle.java.Java-Updater.plist 3rd-Party support link
              [System] com.orderedbytes.ControllerMateHelper.plist 3rd-Party support link
    User Launch Agents:
              [not loaded] com.facebook.videochat.[redacted].plist 3rd-Party support link
    User Login Items:
              iTunesHelper
              USBOverdriveHelper
              VMware Fusion Start Menu
    Internet Plug-ins:
              FlashPlayer-10.6: Version: 11.9.900.117 - SDK 10.6 3rd-Party support link
              Default Browser: Version: 537 - SDK 10.9
              Flash Player: Version: 11.9.900.117 - SDK 10.6 Outdated! Update
              QuickTime Plugin: Version: 7.7.3
              o1dbrowserplugin: Version: 5.1.4.17398 3rd-Party support link
              SharePointBrowserPlugin: Version: 14.0.0 3rd-Party support link
              npgtpo3dautoplugin: Version: 0.1.44.29 - SDK 10.5 3rd-Party support link
              googletalkbrowserplugin: Version: 5.1.4.17398 3rd-Party support link
              JavaAppletPlugin: Version: Java 7 Update 45 Outdated! Update
    Audio Plug-ins:
              BluetoothAudioPlugIn: Version: 1.0 - SDK 10.9
              AirPlay: Version: 1.9 - SDK 10.9
              AppleAVBAudio: Version: 2.0.0 - SDK 10.9
              iSightAudio: Version: 7.7.3 - SDK 10.9
    User Internet Plug-ins:
              Unity Web Player: Version: UnityPlayer version 2.6.1f3 3rd-Party support link
              RealPlayer Plugin: Version: Unknown
              Google Earth Web Plug-in: Version: 7.1 3rd-Party support link
    3rd Party Preference Panes:
              Flash Player  3rd-Party support link
              Java  3rd-Party support link
    Bad Fonts:
              None
    Old Applications:
              Wondershare Helper Compact:          Version: 2.2.6.0 - SDK 10.5 3rd-Party support link
                        /Applications/Wondershare Helper compact/Wondershare Helper Compact.app
              PwnageTool:          Version: 5.1.1 - SDK 10.4 3rd-Party support link
                        /Users/Shared/Sowndar/iPhone/PwnageTool.app
              VLC:          Version: 2.0.1 - SDK 10.5 3rd-Party support link
    Time Machine:
              Time Machine not configured!
    Top Processes by CPU:
                   6%          mds
                   3%          mds_stores
                   0%          EtreCheck
                   0%          WindowServer
                   0%          SystemUIServer
    Top Processes by Memory:
              143 MB          com.apple.IconServicesAgent
              102 MB          mds_stores
              90 MB          softwareupdated
              70 MB          sandboxd
              61 MB          com.apple.WebKit.WebContent
    Virtual Memory Information:
              548 MB          Free RAM
              1.53 GB          Active RAM
              1.12 GB          Inactive RAM
              826 MB          Wired RAM
              1.53 GB          Page-ins
              0 B          Page-outs

  • Connection through jdbc thin client taking more time than from sqlplus!!!

Hello All,
Machines A and B.
The application on A is connecting to B (9.2.0.6), the db server.
The schema is quite small, with few tables and less than 500 rows in each table.
We are in the process of migrating the application schema on B to C [9.2.0.8].
But the response time is higher when the application fetches from C,
even while selecting sysdate from dual.
The application is using the jdbc thin client for fetching the data.
When the same sql is executed (from A to C) with
sqlplus -s user/pass @execute.sql, it gets done in a fraction of a second.
But when the same is done through the application, which uses the jdbc thin client, it takes a few seconds
to complete.
When tried with a small java program that uses classes12.jar (from A to C):
long start = System.currentTimeMillis();
conn = DriverManager.getConnection(URL, UID, PASS);
long stop = System.currentTimeMillis();
System.out.println("Connection time in milli sec: " + (stop - start));
System.out.println();
...it was found that creating the connection was taking the time.
But the same does not happen when tried through sqlplus.
Could someone throw some light on this?
What could be the reason for jdbc being slower at establishing connections?
    TIA,
    JJ

are you using the latest drivers - http://www.oracle.com/technology/tech/java/sqlj_jdbc/index.html
you may want to check some options for reducing jdbc connection cost in the otn samples - http://www.oracle.com/technology/sample_code/tech/java/sqlj_jdbc/index.html

  • Concurrent request taking more time than usual

    HI,
I am running a concurrent request: Payables Transfer to General Ledger,
which usually completed within 5 minutes, but now it is taking hours to complete. The log file is not showing any errors as such.
I increased the standard manager processes and bounced the Concurrent Managers and the Database as well. I also ran cmclean.sql, but no luck.
Can anybody suggest an approach for this problem?
    thanks n regards,
    Sajad

Check the enable-trace option in the concurrent program definition.
Then run the concurrent request, get the trace file from the udump directory, tkprof it, and analyze which query is causing the issue.
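For example (a sketch; the trace file name is illustrative, and the sort flags are optional tkprof arguments):
-- find where the trace files land
SELECT value FROM v$parameter WHERE name = 'user_dump_dest';
-- then at the OS prompt:
-- tkprof ORCL_ora_12345.trc payables_gl.txt sys=no sort=exeela,fchela
Sorting by elapsed execute/fetch time puts the most expensive statements at the top of the report.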

  • RMAN cold backup taking more time than usual

Hi everybody, please help me resolve the issue below.
I configured RMAN in one of the production databases, with a separate catalogue database, six months ago. I have scheduled a weekly cold backup through RMAN on Sunday at 6pm. Normally it used to take one hour to complete the cold backup, and the database goes down as soon as the job starts.
But since then, the time taken just to initiate the database shutdown has been increasing every week, and when I checked recently it was taking 1 hour to initiate the shutdown. Once the initiation starts, it hardly takes 1 to 3 min to shut down.
The database is up and running during that one hour. I was under the assumption that RMAN takes some time to execute its internal packages.
    Please help
    Regards,
    Arun Kumar

    Hi John and Tychos,
    Thank you very much for your valuable inputs.
Yesterday there was a cold backup and I monitored the CPU usage, but there was no load on the CPU at that time; CPU usage was 0%.
I tried connecting to RMAN manually and it connects within a second. I also noticed in prstat -a that rman connects as soon as the job starts.
So I think it is spending the time deleting obsolete backups.
But I have observed the following.
    Before executing the delete obsolete command as mentioned before
    RMAN> REPORT OBSOLETE RECOVERY WINDOW OF 35 DAYS DEVICE TYPE 'SBT_TAPE';
    Report of obsolete backups and copies
    Type Key Completion Time Filename/Handle
    Backup Set 83409 25-JUL-09
    Backup Piece 83587 25-JUL-09 arc_SID_20090725_557_1
    Backup Set 83410 25-JUL-09
    Backup Piece 83588 25-JUL-09 arc_SID_20090725_558_1
    Backup Set 83411 25-JUL-09
    Backup Piece 83589 25-JUL-09 arc_SID_20090725_559_1
    After executing the delete obsolete command
    RMAN> REPORT OBSOLETE RECOVERY WINDOW OF 35 DAYS DEVICE TYPE 'SBT_TAPE';
    Report of obsolete backups and copies
    Type Key Completion Time Filename/Handle
    Backup Set 83409 25-JUL-09
    Backup Piece 83587 25-JUL-09 arc_SID_20090725_557_1
    Backup Set 83410 25-JUL-09
    Backup Piece 83588 25-JUL-09 arc_SID_20090725_558_1
    Backup Set 83411 25-JUL-09
    Backup Piece 83589 25-JUL-09 arc_SID_20090725_559_1
    Please advice me on the following.
1. Why is it not deleting the obsolete BACKUP SETS?
2. Is it normal that RMAN takes this much time deleting obsolete backup sets? How can I minimize the time taken to delete obsolete files?
    Thanks and Regards,
    Arun Kumar
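One thing worth ruling out here (a sketch, not a diagnosis): obsolete pieces that live on tape can only be removed through an SBT channel, so try the delete with a maintenance channel allocated (media-manager PARMS omitted):
RMAN> ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE 'SBT_TAPE';
RMAN> DELETE OBSOLETE RECOVERY WINDOW OF 35 DAYS DEVICE TYPE 'SBT_TAPE';
If the pieces still remain, check whether a KEEP attribute or the retention policy recorded in the catalog is protecting those backup sets.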

  • BW Job Taking more time than normal execution time

    Hi,
    Customer is trying to extract data from R/3 with BW OCPA Extractor.
    Selections within Infopackage under tab Data Selection are
    0FISCPER = 010.2000
    ZVKORG = DE
Then the InfoPackage is scheduled; after this, the monitor button is selected for the scheduled date and time. Gathering this information from the R/3 system is taking approximately 2 hours, whereas it normally used to take minutes.
This is pulling data from R/3 and updating the PSA; the concern is the time taken to pull the data from R/3 to BW.
If any further input is required please let me know; the earliest solution is appreciated.
    Thanks
    Vijay

    Hi Vijay,
If you think the data transfer is the problem (i.e. the extractor runs for a long time), try to locate the job on the R/3 side using SM37 (user ALEREMOTE), or look in SM50 to see if the extraction is still running.
You can also test the extraction in R/3 using tcode RSA3 with the same selection criteria.
If this goes fast (as expected), the problem must be on the BW side. Another thing you can do is check whether a short dump occurred in either R/3 or BW -> tcode ST22. This will often keep the traffic light on yellow.
    Hope this helps to solve your problem.
    Grtx,
    Marco
