Query Formatting Time Is Too High
Dear Friends,
I have a stock report that generally executes within 10 to 15 minutes, but it is now taking much longer to produce output; most of the time is spent formatting the report. I have already created indexes and refreshed the DB statistics for that cube, so please suggest a solution for this problem.
Thanks
Ganga
Hi Ganga,
Check these; they might be useful:
http://help.sap.com/saphelp_nw04s/helpdata/en/a0/2a183d30805c59e10000000a114084/frameset.htm
http://help.sap.com/saphelp_nw04s/helpdata/en/57/b10022e849774f9961aa179e8763b6/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
http://help.sap.com/saphelp_nw04s/helpdata/en/2e/caceae8dd08e48a63c2da60c8ffa5e/frameset.htm
Hope this helps.
Rgs,
I.R.K
Similar Messages
-
Query response time is longer when calling from a package
SELECT
/* UTILITIES_PKG.GET_COUNTRY_CODE(E.EMP_ID,E.EMP_NO) COUNTRY_ID */
(SELECT DISTINCT IE.COUNTRY_ID
FROM DOCUMENT IE
WHERE IE.EMP_ID =E.EMP_ID
AND IE.EMP_NO = E.EMP_NO
AND IE.STATUS = 'OPEN' ) COUNTRY_ID
FROM EMPLOYEE E
CREATE OR REPLACE PACKAGE BODY UTILITIES_PKG AS
  FUNCTION GET_COUNTRY_CODE (P_EMP_ID IN VARCHAR2, P_EMP_NO IN VARCHAR2)
    RETURN VARCHAR2
  IS
    L_COUNTRY_ID VARCHAR2(25) := '';
  BEGIN
    SELECT DISTINCT IE.COUNTRY_ID
      INTO L_COUNTRY_ID
      FROM DOCUMENT IE
     WHERE IE.EMP_ID = P_EMP_ID
       AND IE.EMP_NO = P_EMP_NO
       AND IE.STATUS = 'OPEN';
    RETURN L_COUNTRY_ID;
  EXCEPTION
    WHEN OTHERS THEN
      RETURN 'CONT';
  END;
END UTILITIES_PKG;
When I run the above query it completes in about 1.2 seconds, but when I comment out the subquery and call the function from the package instead, it takes 9 seconds. The query returns more than 2000 records. I am not able to find out why it takes more time when called from the package.
You are most likely getting a different plan when you run it as PL/SQL. Comment your statement:
SELECT /* your comment here */
Then find the statements in V$SQL and get their SQL IDs. You can then use DBMS_XPLAN.DISPLAY_CURSOR to see what is actually happening.
http://www.psoug.org/reference/dbms_xplan.html -
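The steps above can be sketched as follows; `my_country_code_test` and `your_sql_id_here` are placeholders you would substitute with your own comment text and the SQL ID you find:

```sql
-- Tag the statement so it is easy to find in the shared pool,
-- then run it once from SQL*Plus and once via the package.
SELECT /* my_country_code_test */ ...

-- Locate the tagged cursors and note their SQL IDs.
SELECT sql_id, child_number, sql_text
FROM   v$sql
WHERE  sql_text LIKE '%my_country_code_test%';

-- Display the actual execution plan for a given SQL ID / child cursor.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('your_sql_id_here', 0));
```

Comparing the two plans side by side should show where the extra time is going.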
Query in timesten taking more time than query in oracle database
Hi,
Can anyone please explain why a query in TimesTen takes more time
than the same query in the Oracle database?
I am mentioning in detail what are my settings and what have I done
step by step.........
1.This is the table I created in Oracle datababase
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0)...
CREATE TABLE student (
id NUMBER(9) PRIMARY KEY,
first_name VARCHAR2(10),
last_name VARCHAR2(10)
);
2.THIS IS THE ANONYMOUS BLOCK I USE TO
POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
declare
  firstname varchar2(12);
  lastname  varchar2(12);
begin
  for cntr in 1..2599999 loop
    firstname := (cntr + 8) || 'f';
    lastname  := (cntr + 2) || 'l';
    -- print progress for every counter value ending in 9999
    if mod(cntr, 10000) = 9999 then
      dbms_output.put_line(cntr);
    end if;
    insert into student values (cntr, firstname, lastname);
  end loop;
  commit;
end;
/
3. MY DSN IS SET THE FOLLWING WAY..
DATA STORE PATH- G:\dipesh3repo\db
LOG DIRECTORY- G:\dipesh3repo\log
PERM DATA SIZE-1000
TEMP DATA SIZE-1000
MY TIMESTEN VERSION-
C:\Documents and Settings\dipesh>ttversion
TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
Instance admin: dipesh
Instance home directory: G:\TimestTen\TT70_32
Daemon home directory: G:\TimestTen\TT70_32\srv\info
THEN I CONNECT TO THE TIMESTEN DATABASE
C:\Documents and Settings\dipesh> ttisql
command>connect "dsn=dipesh3;oraclepwd=tiger";
4. THEN I START THE AGENT
call ttCacheUidPwdSet('SCOTT','TIGER');
Command> CALL ttCacheStart();
5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
create readonly cache group rc_student autorefresh
interval 5 seconds from student
(id int not null primary key, first_name varchar2(10), last_name varchar2(10));
load cache group rc_student commit every 100 rows;
6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
I SET THE TIMING..
command>TIMING 1;
consider this query now..
Command> select * from student where first_name='2155666f';
< 2155658, 2155666f, 2155660l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
another query-
Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
2206: Table SCOTT.STUDENTS not found
Execution time (SQLPrepare) = 0.074964 seconds.
The command failed.
Command> SELECT * FROM STUDENT where first_name='2093434f';
< 2093426, 2093434f, 2093428l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
Command>
7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
ID FIRST_NAME LAST_NAME
1498663 1498671f 1498665l
Elapsed: 00:00:00.15
Can anyone please explain why the query in TimesTen takes more time
than the query in the Oracle database?
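A likely cause, assuming the cache group table carries only the primary-key index on id, is that the predicate on first_name forces a full table scan in TimesTen. A minimal sketch of the fix (index and stats names are illustrative):

```sql
-- Sketch: add an index on the column used in the WHERE clause,
-- since the cache group only defined a primary key on id.
CREATE INDEX student_fn_ix ON student (first_name);

-- Refresh optimizer statistics so the new index is considered
-- (ttOptUpdateStats is the TimesTen built-in procedure for this).
CALL ttOptUpdateStats('student', 1);
```

After this, re-run the first_name query in ttIsql with TIMING 1 and compare the execution time against the earlier full-scan numbers.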
TimesTen
Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
Version: 7.0.4.0.0 64 bit
Schema:
create usermanaged cache group factCache from
MV_US_DATAMART (
ORDER_DATE DATE,
IF_SYSTEM VARCHAR2(32) NOT NULL,
GROUPING_ID TT_BIGINT,
TIME_DIM_ID TT_INTEGER NOT NULL,
BUSINESS_DIM_ID TT_INTEGER NOT NULL,
ACCOUNT_DIM_ID TT_INTEGER NOT NULL,
ORDERTYPE_DIM_ID TT_INTEGER NOT NULL,
INSTR_DIM_ID TT_INTEGER NOT NULL,
EXECUTION_DIM_ID TT_INTEGER NOT NULL,
EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
NO_ORDERS TT_BIGINT,
FILLED_QUANTITY TT_BIGINT,
CNT_FILLED_QUANTITY TT_BIGINT,
QUANTITY TT_BIGINT,
CNT_QUANTITY TT_BIGINT,
COMMISSION BINARY_FLOAT,
CNT_COMMISSION TT_BIGINT,
FILLS_NUMBER TT_BIGINT,
CNT_FILLS_NUMBER TT_BIGINT,
AGGRESSIVE_FILLS TT_BIGINT,
CNT_AGGRESSIVE_FILLS TT_BIGINT,
NOTIONAL BINARY_FLOAT,
CNT_NOTIONAL TT_BIGINT,
TOTAL_PRICE BINARY_FLOAT,
CNT_TOTAL_PRICE TT_BIGINT,
CANCELLED_ORDERS_COUNT TT_BIGINT,
CNT_CANCELLED_ORDERS_COUNT TT_BIGINT,
ROUTED_ORDERS_NO TT_BIGINT,
CNT_ROUTED_ORDERS_NO TT_BIGINT,
ROUTED_LIQUIDITY_QTY TT_BIGINT,
CNT_ROUTED_LIQUIDITY_QTY TT_BIGINT,
REMOVED_LIQUIDITY_QTY TT_BIGINT,
CNT_REMOVED_LIQUIDITY_QTY TT_BIGINT,
ADDED_LIQUIDITY_QTY TT_BIGINT,
CNT_ADDED_LIQUIDITY_QTY TT_BIGINT,
AGENT_CHARGES BINARY_FLOAT,
CNT_AGENT_CHARGES TT_BIGINT,
CLEARING_CHARGES BINARY_FLOAT,
CNT_CLEARING_CHARGES TT_BIGINT,
EXECUTION_CHARGES BINARY_FLOAT,
CNT_EXECUTION_CHARGES TT_BIGINT,
TRANSACTION_CHARGES BINARY_FLOAT,
CNT_TRANSACTION_CHARGES TT_BIGINT,
ORDER_MANAGEMENT BINARY_FLOAT,
CNT_ORDER_MANAGEMENT TT_BIGINT,
SETTLEMENT_CHARGES BINARY_FLOAT,
CNT_SETTLEMENT_CHARGES TT_BIGINT,
RECOVERED_AGENT BINARY_FLOAT,
CNT_RECOVERED_AGENT TT_BIGINT,
RECOVERED_CLEARING BINARY_FLOAT,
CNT_RECOVERED_CLEARING TT_BIGINT,
RECOVERED_EXECUTION BINARY_FLOAT,
CNT_RECOVERED_EXECUTION TT_BIGINT,
RECOVERED_TRANSACTION BINARY_FLOAT,
CNT_RECOVERED_TRANSACTION TT_BIGINT,
RECOVERED_ORD_MGT BINARY_FLOAT,
CNT_RECOVERED_ORD_MGT TT_BIGINT,
RECOVERED_SETTLEMENT BINARY_FLOAT,
CNT_RECOVERED_SETTLEMENT TT_BIGINT,
CLIENT_AGENT BINARY_FLOAT,
CNT_CLIENT_AGENT TT_BIGINT,
CLIENT_ORDER_MGT BINARY_FLOAT,
CNT_CLIENT_ORDER_MGT TT_BIGINT,
CLIENT_EXEC BINARY_FLOAT,
CNT_CLIENT_EXEC TT_BIGINT,
CLIENT_TRANS BINARY_FLOAT,
CNT_CLIENT_TRANS TT_BIGINT,
CLIENT_CLEARING BINARY_FLOAT,
CNT_CLIENT_CLEARING TT_BIGINT,
CLIENT_SETTLE BINARY_FLOAT,
CNT_CLIENT_SETTLE TT_BIGINT,
CHARGEABLE_TAXES BINARY_FLOAT,
CNT_CHARGEABLE_TAXES TT_BIGINT,
VENDOR_CHARGE BINARY_FLOAT,
CNT_VENDOR_CHARGE TT_BIGINT,
ROUTING_CHARGES BINARY_FLOAT,
CNT_ROUTING_CHARGES TT_BIGINT,
RECOVERED_ROUTING BINARY_FLOAT,
CNT_RECOVERED_ROUTING TT_BIGINT,
CLIENT_ROUTING BINARY_FLOAT,
CNT_CLIENT_ROUTING TT_BIGINT,
TICKET_CHARGES BINARY_FLOAT,
CNT_TICKET_CHARGES TT_BIGINT,
RECOVERED_TICKET_CHARGES BINARY_FLOAT,
CNT_RECOVERED_TICKET_CHARGES TT_BIGINT,
PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
READONLY);
No of rows: 2228558
Config:
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
< ConnectionCharacterSet, US7ASCII >
< ConnectionName, tt_us_dma >
< Connections, 64 >
< DataBaseCharacterSet, AL32UTF8 >
< DataStore, e:\andrew\datacache\usDMA >
< DurableCommits, 0 >
< GroupRestrict, <NULL> >
< LockLevel, 0 >
< LockWait, 10 >
< LogBuffSize, 65536 >
< LogDir, e:\andrew\datacache\ >
< LogFileSize, 64 >
< LogFlushMethod, 1 >
< LogPurge, 0 >
< Logging, 1 >
< MemoryLock, 0 >
< NLS_LENGTH_SEMANTICS, BYTE >
< NLS_NCHAR_CONV_EXCP, 0 >
< NLS_SORT, BINARY >
< OracleID, NYCATP1 >
< PassThrough, 0 >
< PermSize, 4000 >
< PermWarnThreshold, 90 >
< PrivateCommands, 0 >
< Preallocate, 0 >
< QueryThreshold, 0 >
< RACCallback, 0 >
< SQLQueryTimeout, 0 >
< TempSize, 514 >
< TempWarnThreshold, 90 >
< Temporary, 1 >
< TransparentLoad, 0 >
< TypeMode, 0 >
< UID, OS_OWNER >
ORACLE:
Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
Schema:
CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
TABLESPACE TS_OS
PARTITION BY RANGE (ORDER_DATE)
PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
LOGGING
NOCOMPRESS
TABLESPACE TS_OS
NOCACHE
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
USING INDEX
TABLESPACE TS_OS_INDEX
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT order_date, if_system,
GROUPING_ID (order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id
) GROUPING_ID,
/* ============ DIMENSIONS ============ */
time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
instr_dim_id, execution_dim_id, exec_exchange_dim_id,
/* ============ MEASURES ============ */
-- o.FX_RATE /* FX_RATE */,
COUNT (*) no_orders,
-- SUM(NO_ORDERS) NO_ORDERS,
-- COUNT(NO_ORDERS) CNT_NO_ORDERS,
SUM (filled_quantity) filled_quantity,
COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
COUNT (quantity) cnt_quantity, SUM (commission) commission,
COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
COUNT (fills_number) cnt_fills_number,
SUM (aggressive_fills) aggressive_fills,
COUNT (aggressive_fills) cnt_aggressive_fills,
SUM (fx_rate * filled_quantity * average_price) notional,
COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
SUM (fx_rate * fills_number * average_price) total_price,
COUNT (fx_rate * fills_number * average_price) cnt_total_price,
SUM (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END) cancelled_orders_count,
COUNT (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END
) cnt_cancelled_orders_count,
-- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
-- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
-- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
SUM (routed_orders_no) routed_orders_no,
COUNT (routed_orders_no) cnt_routed_orders_no,
SUM (routed_liquidity_qty) routed_liquidity_qty,
COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
SUM (removed_liquidity_qty) removed_liquidity_qty,
COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
SUM (added_liquidity_qty) added_liquidity_qty,
COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
SUM (agent_charges) agent_charges,
COUNT (agent_charges) cnt_agent_charges,
SUM (clearing_charges) clearing_charges,
COUNT (clearing_charges) cnt_clearing_charges,
SUM (execution_charges) execution_charges,
COUNT (execution_charges) cnt_execution_charges,
SUM (transaction_charges) transaction_charges,
COUNT (transaction_charges) cnt_transaction_charges,
SUM (order_management) order_management,
COUNT (order_management) cnt_order_management,
SUM (settlement_charges) settlement_charges,
COUNT (settlement_charges) cnt_settlement_charges,
SUM (recovered_agent) recovered_agent,
COUNT (recovered_agent) cnt_recovered_agent,
SUM (recovered_clearing) recovered_clearing,
COUNT (recovered_clearing) cnt_recovered_clearing,
SUM (recovered_execution) recovered_execution,
COUNT (recovered_execution) cnt_recovered_execution,
SUM (recovered_transaction) recovered_transaction,
COUNT (recovered_transaction) cnt_recovered_transaction,
SUM (recovered_ord_mgt) recovered_ord_mgt,
COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
SUM (recovered_settlement) recovered_settlement,
COUNT (recovered_settlement) cnt_recovered_settlement,
SUM (client_agent) client_agent,
COUNT (client_agent) cnt_client_agent,
SUM (client_order_mgt) client_order_mgt,
COUNT (client_order_mgt) cnt_client_order_mgt,
SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
SUM (client_trans) client_trans,
COUNT (client_trans) cnt_client_trans,
SUM (client_clearing) client_clearing,
COUNT (client_clearing) cnt_client_clearing,
SUM (client_settle) client_settle,
COUNT (client_settle) cnt_client_settle,
SUM (chargeable_taxes) chargeable_taxes,
COUNT (chargeable_taxes) cnt_chargeable_taxes,
SUM (vendor_charge) vendor_charge,
COUNT (vendor_charge) cnt_vendor_charge,
SUM (routing_charges) routing_charges,
COUNT (routing_charges) cnt_routing_charges,
SUM (recovered_routing) recovered_routing,
COUNT (recovered_routing) cnt_recovered_routing,
SUM (client_routing) client_routing,
COUNT (client_routing) cnt_client_routing,
SUM (ticket_charges) ticket_charges,
COUNT (ticket_charges) cnt_ticket_charges,
SUM (recovered_ticket_charges) recovered_ticket_charges,
COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
FROM us_datamart_raw
GROUP BY order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id;
-- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
-- by Oracle with the associated materialized view.
CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
NOLOGGING
NOPARALLEL
COMPRESS 7;
No of rows: 2228558
The query (taken Mondrian) I run against each of them is:
select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
--, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
--, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
--, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
--, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
--, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
--, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
--, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
--, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
--, sum("MV_US_DATAMART"."COMMISSION") as "m9"
--, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
--, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
--,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
--,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
--, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
--, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
--, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
--, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
--,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
--, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
I uncomment one column at a time and rerun. I improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
No Columns ORACLE TimesTen
1 1.05 0.94
2 1.07 1.47
3 2.04 1.8
4 2.06 2.08
5 2.09 2.4
6 3.01 2.67
7 4.02 3.06
8 4.03 3.37
9 4.04 3.62
10 4.06 4.02
11 4.08 4.31
12 4.09 4.61
13 5.01 4.76
14 5.02 5.06
15 5.04 5.25
16 5.05 5.48
17 5.08 5.84
18 6 6.21
19 6.02 6.34
20 6.04 6.75
-
Mac Formatted Time Capsule w/ PC Formatted External Drive Via USB?
I use a Mac-formatted Time Capsule to run Time Machine backups on my MacBook Pro. My wife backs up her PC laptop on a PC-formatted Western Digital "My Book" hard drive which is powered via A/C wall adapter. I connected the Western Digital PC drive to the Time Capsule via USB so it would be available on our home network and my wife could run her PC backups wirelessly, BUT the drive is not showing up on our network. Anybody have any idea what I'm doing wrong?
Thanks in advance!
---Tim
My questions continue below in red. Thank you for your patience.
LaPastenague wrote:
Tsac77 wrote:
Thanks for the in depth clarification. A few follow up questions:
1) Are you saying that even though I am only using the TC (in this instance) to make the windows drive available on the network, the drive still needs to be formatted in HFS+?
HFS+ or FAT32.. the latter having severe issues with large disks and large files.
Great. I'll do HFS+
2) If I format the windows drive in HFS+, will the windows laptop be immediately able to read/write to it over the network? Or will I have to install some sort of program on the windows laptop that allows windows to read/write to HFS+ (which I've always understood to be a mac-only format)?
The format of the disk connected to a NAS is totally irrelevant.. if it offers the windows networking file protocol, ie SMB, then it can store files on the hard disk.. it is just like sharing the disk in the Mac.. you can write to that directly from the PC. Try it.. The disk is controlled by the Mac, but it presents SMB to the network.
On your Mac, go to System Preferences.. Sharing.. it should be in Internet and Wireless.. but on Lion I have no idea. Open File Sharing, which should be ticked.. there should be a public folder by default.. you can add others.. go to Options, turn on SMB.. then from the windows computer you should see the disk available in Networking. If not, fix the names to the SMB standard.. you will need to do this on the TC as well.. short, no spaces.. pure alphanumeric names.
Copy some files from windows to your Mac.. you have now copied to a network drive.. with the drive format totally irrelevant to your windows computer.
I'm sorry, but I don't know what "windows networking file protocol" and "SMB" mean. Also, when you say "NAS" are you referring to the Time Capsule or another device? Can you please explain things like "The disk is controlled by the Mac but it presents SMB to the network" in a less advanced way. Pretend I'm a child. You won't be far off. It seems like this technique you're explaining is a way for the windows PC to write to the external drive via my MacBook. Is that even close to correct? The setup I must create needs to work regardless of whether the MacBook is on, off, or out of the house.
3) Does plugging a USB drive into the TC never work properly or is this just a semi-common problem that happens sometimes? My odds of success will probably dictate whether I try this or not. I already own the windows USB drive and would like to avoid buying more hardware if possible.
It is problematic enough that I recommend you try it before committing to it. Even if you have to copy the files off to another location and then reformat the drive. Test it for a couple of days.. the issue happens especially when you are using Lion.. but will also happen with windows. Especially when the disk spins down it will not spin up again and become available to the network.
Got it.
4) What do you mean by files being in "native format"? And what does a dead TC have to do with whether or not they can be recovered to the windows laptop? If the TC dies, can't I just plug the windows USB drive directly into the windows laptop and it will be the same thing as connecting the drive and laptop via network? How does a dead TC make things more difficult?
Thanks for your continued help.
HFS+ is not windows' native format... so the files stored by the TC on the USB drive may be difficult to recover when the TC dies (they all die eventually), unless you plug the drive into another TC.. as the PC cannot natively read them. If you format FAT32 then the windows computer can read it.. it is just that there are other issues, eg the disk can be slow.. very slow.. and it has far less protection, eg a power failure can corrupt the drive.. this was an issue before windows moved to NTFS.
Why do you say "files stored BY THE TC on the USB drive"? Why is the TC storing files on the USB drive? Isn't plugging the USB into the TC just making the USB available on the network to my windows laptop? Why is the TC storing files?
-
We have this query which is taking a long time to execute. From the explain plan, what I found out is that there is a full table scan going on W_GL_OTHER_F. Please help in identifying the problem area and solutions.
The query is,
select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4,
D1.c5 as c5,
D1.c6 as c6,
D1.c7 as c7,
D1.c8 as c8
from
(select distinct D1.c2 as c1,
D1.c3 as c2,
D1.c4 as c3,
D1.c5 as c4,
D1.c6 as c5,
D1.c7 as c6,
D1.c8 as c7,
D1.c1 as c8,
D1.c5 as c9
from
(select sum(case when T324628.OTHER_DOC_AMT is null then 0 else T324628.OTHER_DOC_AMT end ) as c1,
T91397.GL_ACCOUNT_NUM as c2,
T149255.SEGMENT_VAL_CODE as c3,
T148908.SEGMENT_VAL_DESC as c4,
T148543.HIER4_CODE as c5,
T148543.HIER4_NAME as c6,
T91707.ACCT_DOC_NUM as c7,
T91707.X_LINE_DESCRIPTION as c8
from
W_GL_OTHER_F T91707 /* Fact_W_GL_OTHER_F */ ,
W_GL_ACCOUNT_D T91397 /* Dim_W_GL_ACCOUNT_D */ ,
W_STATUS_D T96094 /* Dim_W_STATUS_D_Generic */ ,
WC_GL_OTHER_F_MV T324628 /* Fact_WC_GL_OTHER_MV */ ,
W_GL_SEGMENT_D T149255 /* Dim_W_GL_SEGMENT_D_Segment1 */ ,
W_GL_SEGMENT_D T148937 /* Dim_W_GL_SEGMENT_D_Segment3 */ ,
W_HIERARCHY_D T148543 /* Dim_W_HIERARCHY_D_Segment3 */ ,
W_GL_SEGMENT_D T148908 /* Dim_W_GL_SEGMENT_D_Segment2 */
where ( T91397.ROW_WID = T91707.GL_ACCOUNT_WID
and T91707.DOC_STATUS_WID = T96094.ROW_WID
and T96094.ROW_WID = T324628.DOC_STATUS_WID
and T148543.HIER_CODE = T148937.SEGMENT_LOV_ID
and T148543.HIER20_CODE = T148937.SEGMENT_VAL_CODE
and T324628.DELETE_FLG = 'N'
and T324628.X_CURRENCY_CODE = 'CAD'
and T148543.HIER4_CODE <> '00000000000'
and T91397.RECON_TYPE_CODE is not null
and T91397.ROW_WID = T324628.GL_ACCOUNT_WID
and T91397.ACCOUNT_SEG3_CODE = T148937.SEGMENT_VAL_CODE
and T91397.ACCOUNT_SEG3_ATTRIB = T148937.SEGMENT_LOV_ID
and T91397.ACCOUNT_SEG2_CODE = T148908.SEGMENT_VAL_CODE
and T91397.ACCOUNT_SEG2_ATTRIB = T148908.SEGMENT_LOV_ID
and T91397.ACCOUNT_SEG1_CODE = T149255.SEGMENT_VAL_CODE
and T91397.ACCOUNT_SEG1_ATTRIB = T149255.SEGMENT_LOV_ID
and (T96094.W_STATUS_CODE in ('POSTED', 'REVERSED'))
and T91397.GL_ACCOUNT_NUM like '%98%' )
group by T91397.GL_ACCOUNT_NUM, T91707.ACCT_DOC_NUM, T91707.X_LINE_DESCRIPTION, T148543.HIER4_CODE, T148543.HIER4_NAME, T148908.SEGMENT_VAL_DESC, T149255.SEGMENT_VAL_CODE
) D1
) D1
order by c1, c2, c3, c4, c5, c6, c7
The plan is,
PLAN_TABLE_OUTPUT
Plan hash value: 3196636288
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Psto
| 0 | SELECT STATEMENT | | 810K| 306M| | 266K (1)| 01:20:03 | | |
| 1 | HASH GROUP BY | | 810K| 306M| 320M| 266K (1)| 01:20:03 | | |
|* 2 | HASH JOIN | | 810K| 306M| 38M| 239K (1)| 01:11:56 | | |
|* 3 | MAT_VIEW ACCESS FULL | WC_GL_OTHER_F_MV | 1137K| 40M| | 9771 (2)| 00:0
|* 4 | HASH JOIN | | 531K| 189M| | 222K (1)| 01:06:38 | | |
| 5 | INLIST ITERATOR | | | | | | | | |
|* 6 | INDEX RANGE SCAN | W_STATUS_D_U2 | 4 | 56 | | 1 (0)| 00:00:01 |
|* 7 | HASH JOIN | | 607K| 208M| 8704K| 222K (1)| 01:06:38 | | |
|* 8 | HASH JOIN | | 40245 | 8214K| 2464K| 10843 (2)| 00:03:16 | | |
| 9 | VIEW | index$_join$_007 | 35148 | 2025K| | 122 (32)| 00:00:03 | |
|* 10 | HASH JOIN | | | | | | | | |
|* 11 | HASH JOIN | | | | | | | | |
|* 12 | HASH JOIN | | | | | | | | |
| 13 | BITMAP CONVERSION TO ROWIDS | | 35148 | 2025K| | 1 (0)| 00:00:01 | |
| 14 | BITMAP INDEX FULL SCAN | W_HIERARCHY_D_M2 | | | | | | |
| 15 | BITMAP CONVERSION TO ROWIDS | | 35148 | 2025K| | 24 (0)| 00:00:01 | |
| 16 | BITMAP INDEX FULL SCAN | W_HIERARCHY_D_M4 | | | | | | |
| 17 | BITMAP CONVERSION TO ROWIDS | | 35148 | 2025K| | 24 (0)| 00:00:01 | |
|* 18 | BITMAP INDEX FULL SCAN | X_W_HIERARCHY_D_M11 | | | | | | |
| 19 | BITMAP CONVERSION TO ROWIDS | | 35148 | 2025K| | 33 (0)| 00:00:01 | |
| 20 | BITMAP INDEX FULL SCAN | X_W_HIERARCHY_D_M12 | | | | | | |
|* 21 | HASH JOIN | | 40246 | 5895K| 4096K| 10430 (2)| 00:03:08 | |
| 22 | VIEW | index$_join$_008 | 65417 | 3321K| | 197 (14)| 00:00:04 |
|* 23 | HASH JOIN | | | | | | | | |
|* 24 | HASH JOIN | | | | | | | | |
| 25 | BITMAP CONVERSION TO ROWIDS | | 65417 | 3321K| | 3 (0)| 00:00:01 | |
| 26 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M1 | | | | | | |
| 27 | BITMAP CONVERSION TO ROWIDS | | 65417 | 3321K| | 66 (2)| 00:00:02 | |
| 28 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M2 | | | | | | |
| 29 | BITMAP CONVERSION TO ROWIDS | | 65417 | 3321K| | 100 (1)| 00:00:02 | |
| 30 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M3 | | | | | | |
|* 31 | HASH JOIN | | 40246 | 3851K| | 9953 (1)| 00:03:00 | | |
| 32 | VIEW | index$_join$_006 | 65417 | 1149K| | 82 (18)| 00:00:02 | |
|* 33 | HASH JOIN | | | | | | | | |
| 34 | BITMAP CONVERSION TO ROWIDS | | 65417 | 1149K| | 3 (0)| 00:00:01 | |
| 35 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M1 | | | | | | |
| 36 | BITMAP CONVERSION TO ROWIDS | | 65417 | 1149K| | 66 (2)| 00:00:02 | |
| 37 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M2 | | | | | | |
|* 38 | HASH JOIN | | 40246 | 3144K| | 9870 (1)| 00:02:58 | | |
| 39 | VIEW | index$_join$_005 | 65417 | 1149K| | 82 (18)| 00:00:02 | |
|* 40 | HASH JOIN | | | | | | | | |
| 41 | BITMAP CONVERSION TO ROWIDS| | 65417 | 1149K| | 3 (0)| 00:00:01 | |
| 42 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M1 | | | | | | |
| 43 | BITMAP CONVERSION TO ROWIDS| | 65417 | 1149K| | 66 (2)| 00:00:02 | |
| 44 | BITMAP INDEX FULL SCAN | W_GL_SEGMENT_D_M2 | | | | | | |
|* 45 | TABLE ACCESS FULL | W_GL_ACCOUNT_D | 40246 | 2436K| | 9788 (1)| 00:02:57
| 46 | PARTITION RANGE ALL | | 11M| 4261M| | 152K (2)| 00:45:43 | 1 |1048
| 47 | TABLE ACCESS FULL | W_GL_OTHER_F | 11M| 4261M| | 152K (2)| 00:45:43
Predicate Information (identified by operation id):
2 - access("T96094"."ROW_WID"="T324628"."DOC_STATUS_WID" AND "T91397"."ROW_WID"="T324628"."GL_ACC
3 - filter("T324628"."X_CURRENCY_CODE"='CAD' AND "T324628"."DELETE_FLG"='N')
4 - access("T91707"."DOC_STATUS_WID"="T96094"."ROW_WID")
6 - access("T96094"."W_STATUS_CODE"='POSTED' OR "T96094"."W_STATUS_CODE"='REVERSED')
7 - access("T91397"."ROW_WID"="T91707"."GL_ACCOUNT_WID")
8 - access("T148543"."HIER_CODE"="T148937"."SEGMENT_LOV_ID" AND "T148543"."HIER20_CODE"="T148937"
10 - access(ROWID=ROWID)
11 - access(ROWID=ROWID)
12 - access(ROWID=ROWID)
18 - filter("T148543"."HIER4_CODE"<>'00000000000')
21 - access("T91397"."ACCOUNT_SEG2_CODE"="T148908"."SEGMENT_VAL_CODE" AND
"T91397"."ACCOUNT_SEG2_ATTRIB"="T148908"."SEGMENT_LOV_ID")
23 - access(ROWID=ROWID)
24 - access(ROWID=ROWID)
31 - access("T91397"."ACCOUNT_SEG3_CODE"="T148937"."SEGMENT_VAL_CODE" AND
"T91397"."ACCOUNT_SEG3_ATTRIB"="T148937"."SEGMENT_LOV_ID")
33 - access(ROWID=ROWID)
38 - access("T91397"."ACCOUNT_SEG1_CODE"="T149255"."SEGMENT_VAL_CODE" AND
"T91397"."ACCOUNT_SEG1_ATTRIB"="T149255"."SEGMENT_LOV_ID")
40 - access(ROWID=ROWID)
45 - filter("T91397"."GL_ACCOUNT_NUM" LIKE '%98%' AND "T91397"."RECON_TYPE_CODE" IS NOT NULL)
79 rows selected.
user605926 wrote:
We have this query which is taking a long time to execute. From the explain plan what i found out is there is a full table scan going on W_GL_OTHER_F. Please help in identifying the problem area and solutions.
You may want to have a look at the forum thread "HOW TO: Post a SQL statement tuning request - template posting" to see what additional details are needed in order for somebody to provide a better answer.
Based on what you have posted so far, you may want to share answers to the following questions (in addition to the details in the above link):
1) How much time does the query currently take to execute? How much time do you expect it to take? Also, how are you measuring query execution time?
2) Your plan suggests that the query is expected to return 810K rows. Is this figure close to the actual number of records? What are you doing with this huge amount of data? -
How to obtain the Query Response Time of a query?
Given the Average Length of Row of tables and the number of rows in each table,
is there a way we get the query response time of a query involving
those tables. Query includes joins as well.
For example, suppose there 3 tables t1, t2, t3. I wish to obtain the
time it takes for the following query:
Query
SELECT t1.col1, t2.col2
FROM t1, t2, t3
WHERE t1.col1 = t2.col2
AND t1.col2 IN ('a', 'c', 'd')
AND t2.col1 = t3.col2
AND t2.col1 = t1.col1 (+)
ORDER BY t1.col1
Given are:
Average Row Length of t1 = 200 bytes
Average Row Length of t2 = 100 bytes
Average Row Length of t3 = 500 bytes
No of rows in t1 = 100
No of rows in t2 = 1000
No of rows in t3 = 500
What is required is the 'query response time' for the said query.

I do not know how to do it myself, but if you are running Oracle 10g, I believe there is a new tool called the SQL Tuning Advisor which might be able to help.
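As a rough sanity check using the figures given above, the total data volume involved here is tiny, so for tables this small the response time will be dominated by parsing and the chosen plan rather than by I/O volume. A back-of-envelope upper bound on the bytes scanned if all three tables were read in full:

```sql
-- Row counts and average row lengths taken from the question above.
SELECT 100 * 200     -- t1: 100 rows * 200 bytes  =  20000
     + 1000 * 100    -- t2: 1000 rows * 100 bytes = 100000
     + 500 * 500     -- t3: 500 rows * 500 bytes  = 250000
       AS total_bytes
FROM dual;           -- 370000 bytes, roughly 361 KB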
Here are some links I found doing a google search, and it looks like it might meet your needs and even give you more information on how to improve your code.
http://www.databasejournal.com/features/oracle/article.php/3492521
http://www.databasejournal.com/features/oracle/article.php/3387011
http://www.oracle.com/technology/obe/obe10gdb/manage/perflab/perflab.htm
http://www.oracle.com/technology/pub/articles/10gdba/week18_10gdba.html
http://www.oracle-base.com/articles/10g/AutomaticSQLTuning10g.php
Have fun reading:
You can get help from teachers, but you are going to have to learn a lot by yourself, sitting alone in a room. -- Dr. Seuss
Regards
Tim -
Hi friends
How can I check whether a query is taking a reasonable time to execute, or more time than usual? And if it is taking more time, how do I resolve that?
Please forward some links.

Hi,
You can go to ST03 and see the complete statistics of the query: how much time it took, and at which level (OLAP, DB, frontend) it took the most time.
Go to ST03 --> select BI Workload --> select the time period for which you want to see the details.
Select Query Runtimes, and there you can select Aggregation by Query on the right-hand side.
Hope this helps. -
Hi,
I have a query which fetches around 100 records from a table which has approximately 30 million records. Unfortunately, I have to use the same table and can't go ahead with a new table.
The query executes within a second from RapidSQL. The problem I'm facing is it takes more than 10 minutes when I run it through the Java application. It doesn't throw any exceptions, it executes properly.
The query:
SELECT aaa, bbb, SUM(ccc), SUM(ddd), etc
FROM MyTable
WHERE SomeDate= date_entered_by_user AND SomeString IN ("aaa","bbb")
GROUP BY aaa, bbb

I have an existing clustered index on the SomeDate and SomeString fields.
To check I replaced the where clause with
WHERE SomeDate = date_entered_by_user AND SomeString = "aaa"

No improvement.
What could be the problem?
Thank you,
Lobo

It's hard for me to see how a stored proc will address this problem. I don't think it changes anything. Can you explain? The problem is slow query execution time. One way to speed up the execution time inside the RDBMS is to streamline the internal operations inside the interpreter.
When the engine receives a command to execute a SQL statement, it does a few things before actually executing the statement. These things take time. First, it checks to make sure there are no syntax errors in the SQL statement. Second, it checks to make sure all of the tables, columns and relationships "are in order." Third, it formulates an execution plan. This last step takes the most time out of the three. But, they all take time. The speed of these processes may vary from product to product.
When you create a stored procedure in an RDBMS, the processes above occur when you create the procedure. Most importantly, once an execution plan is created it is stored and reused whenever the stored procedure is run. So, whenever an application calls the stored procedure, the execution plan has already been created. The engine does not have to analyze the SELECT|INSERT|UPDATE|DELETE statements and create the plan (over and over again).
The stored execution plan will enable the engine to execute the query faster.
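A minimal sketch of the same plan-reuse idea without a stored procedure, using a SQL*Plus bind variable. The table and column names are the hypothetical ones from the question, and the date value is a placeholder:

```sql
-- A bind variable lets the engine parse and optimize the statement once
-- and reuse the cached plan on every execution, much like the stored
-- procedure's precompiled plan described above.
VARIABLE v_date VARCHAR2(10)
EXEC :v_date := '2011-01-15'

SELECT aaa, bbb, SUM(ccc), SUM(ddd)
FROM   MyTable
WHERE  SomeDate   = TO_DATE(:v_date, 'YYYY-MM-DD')
AND    SomeString IN ('aaa', 'bbb')
GROUP  BY aaa, bbb;
```

A related point worth checking when the same SQL is fast in a GUI tool but slow from a Java application is how the date parameter is bound from JDBC: binding it with a mismatched datatype can prevent the index on the date column from being used.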
-
Can the datatype size in a table definition affect query execution time?
Hello Oracle Guru,
I have one question. Suppose I create a table with more than 100 columns,
and I make every column's datatype VARCHAR2(4000).
The actual data in every column is never more than 300 characters. So in this case,
if I execute a SELECT query,
does the Oracle cursor internally read all 4000 characters of each column one by one,
or does it read character by character and stop at the last actual character, e.g. character 300?
If I reduce the VARCHAR2 size to 300 instead of 4000 in the table definition,
will it affect the SELECT query's execution time?
Thanks in advance.

When you declare a VARCHAR2 column you specify the maximum size that can be stored in that column. The database stores the actual number of bytes (plus 2 bytes for the length). So if you insert a 300-character string, only 302 bytes will be used (assuming the database character set is a single-byte character set).
SY. -
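This is easy to verify for yourself: Oracle's VSIZE function reports the bytes a stored value actually occupies, independent of the declared column width. The table and column names below are stand-ins:

```sql
-- A 300-character string reports ~300 bytes whether the column is
-- declared VARCHAR2(300) or VARCHAR2(4000).
SELECT some_column,
       VSIZE(some_column) AS stored_bytes
FROM   some_table
WHERE  ROWNUM <= 5;
```

That said, declaring widths close to the real data is still good practice; some client tools size their fetch buffers from the declared width rather than the actual data.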
How to know query execution time in sql plus
HI
I want to know the query execution time in sql plus along with statistics
I say set time on ;
set autotrace on ;
select * from view where usr_id='abcd';
if the result is 300 rows it scrolls till all the rows are retrieved and finally gives me execution time as 40 seconds or 1 minute.. (this is after all the records are scrolled )
but when i execute it in toad it gives 350 milli seconds..
I want to see the execution time in SQL*Plus; how do I do this?
database server 11g and client is 10g
regards
raj

What is the difference between the statistics gathered in SQL*Plus (something like the output below) and the ones I get from PLAN_TABLE in TOAD?
And how do I format the execution plan I got in SQL*Plus in a more understandable way?
Statistics in SQL*Plus:
Statistics
0 recursive calls
0 db block gets
164 consistent gets
0 physical reads
0 redo size
29805 bytes sent via SQL*Net to client
838 bytes received via SQL*Net from client
25 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
352 rows processed

Execution plan in SQL*Plus (how to format this?):
Execution Plan
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=21 Card=1 Bytes=1003)
1    0   HASH (UNIQUE) (Cost=21 Card=1 Bytes=1003)
2    1     MERGE JOIN (CARTESIAN) (Cost=20 Card=1 Bytes=1003)
3    2       NESTED LOOPS
4    3         NESTED LOOPS (Cost=18 Card=1 Bytes=976)
5    4           NESTED LOOPS (Cost=17 Card=1 Bytes=797)
6    5             NESTED LOOPS (OUTER) (Cost=16 Card=1 Bytes=685)
7    6               NESTED LOOPS (OUTER) (Cost=15 Card=1 Bytes=556)
8    7                 NESTED LOOPS (Cost=14 Card=1 Bytes=427)
9    8                   NESTED LOOPS (Cost=5 Card=1 Bytes=284)
10   9                     TABLE ACCESS (BY INDEX ROWID) OF 'USR_XREF' (TABLE) (Cost=4 Card=1 Bytes=67)
11  10                       INDEX (RANGE SCAN) OF 'USR_XREF_PK' (INDEX (UNIQUE)) (Cost=2 Card=1)
12   9                     TABLE ACCESS (BY INDEX ROWID) OF 'USR_DIM' (TABLE) (Cost=1 Card=1 Bytes=217)
13  12                       INDEX (UNIQUE SCAN) OF 'USR_DIM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
14   8                   TABLE ACCESS (BY INDEX ROWID) OF 'HDS_FCT' (TABLE) (Cost=9 Card=1 Bytes=143)
15  14                     INDEX (RANGE SCAN) OF 'HDS_FCT_IX2' (INDEX) (Cost=1 Card=338)
16   7                 TABLE ACCESS (BY INDEX ROWID) OF 'USR_MEDIA_COMM' (TABLE) (Cost=1 Card=1 Bytes=129)
17  16                   INDEX (UNIQUE SCAN) OF 'USR_MEDIA_COMM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
18   6               TABLE ACCESS (BY INDEX ROWID) OF 'USR_MEDIA_COMM' (TABLE) (Cost=1 Card=1 Bytes=129)
19  18                 INDEX (UNIQUE SCAN) OF 'USR_MEDIA_COMM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
20   5             TABLE ACCESS (BY INDEX ROWID) OF 'PROD_DIM' (TABLE) (Cost=1 Card=1 Bytes=112)
21  20               INDEX (UNIQUE SCAN) OF 'PROD_DIM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
22   4           INDEX (UNIQUE SCAN) OF 'CUST_DIM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
23   3         TABLE ACCESS (BY INDEX ROWID) OF 'CUST_DIM' (TABLE) (Cost=1 Card=1 Bytes=179)
24   2       BUFFER (SORT) (Cost=19 Card=22 Bytes=594)
25  24         INDEX (FAST FULL SCAN) OF 'PROD_DIM_AK1' (INDEX (UNIQUE)) (Cost=2 Card=22 Bytes=594)
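To answer both questions, a typical SQL*Plus approach looks like the sketch below. Note that SET TIMING ON reports per-statement elapsed time (SET TIME ON merely adds a clock to the prompt), and AUTOTRACE TRACEONLY suppresses the row output so scrolling time is not included. The view name is a stand-in for the one in the question:

```sql
SET TIMING ON
SET AUTOTRACE TRACEONLY STATISTICS

SELECT * FROM my_view WHERE usr_id = 'abcd';

-- A readably formatted plan, via the DBMS_XPLAN package:
EXPLAIN PLAN FOR SELECT * FROM my_view WHERE usr_id = 'abcd';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

TOAD's 350 ms is likely the time to fetch only the first page of rows, whereas the SQL*Plus figure includes fetching and rendering all 352 rows; TRACEONLY makes the two numbers more comparable.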
Query Execution Time for a Query causing ORA-1555
dear Gurus
I have an ORA-01555 error. Earlier I used the query duration mentioned in the alert log and increased the undo retention, as I did not find the UNDOBLKS column of V$UNDOSTAT high around the time the ORA-01555 occurred.
But a new ORA-01555 is coming for a query whose duration exceeds the undo retention time.
My question -
1. Is it possible to accurately find the query duration besides using the alert log file?

abhishek, as you are using an undo tablespace and have already increased the time that undo data is retained via undo_retention, you might want to consider the following ideas, which were useful with the 1555 error under manual rbs segment management.
1- Tune the query. The faster a query runs the less likely a 1555 will occur.
2- Look at the processing. If a process reads and updates the same table while committing frequently, then under manual rbs management that process would basically create its own 1555 error, rather than just being the victim of another process changing data and the rbs data being overlaid while the long-running query was still running. With undo management, the process can generate more change data than can be held for the undo_retention period; because the data is committed, Oracle has been told it doesn't really have to keep it for rolling back a current transaction, so it gets discarded to make room for new changes.
If you find item 2 is true then separating the select from the update will likely eliminate the 1555. You do this by building a driving table that has the keys of the rows to be updated or deleted. Then you use the driver to control accessing the target table.
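A sketch of that driving-table pattern, with hypothetical table and column names:

```sql
-- 1) Capture the keys of the rows to process before any updates begin,
--    so the long-running read finishes before undo starts churning.
CREATE TABLE driver_keys AS
SELECT pk_col
FROM   target_table
WHERE  status = 'PENDING';

-- 2) Drive the updates from the captured keys, so the change activity
--    no longer has to re-read the rows it is modifying.
UPDATE target_table t
SET    t.status = 'DONE'
WHERE  t.pk_col IN (SELECT pk_col FROM driver_keys);
```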
3- If the cause of the 1555 is or may be delayed block cleanout then select * from the target prior to running the long running query.
Realistically you might need to increase the size of the undo tablespace to hold all the change data and the value of the undo_retention parameter to be longer than the job run time. Which brings up back to option 1. Tune every query in the process so that the job run time is reduced to optimal.
HTH -- Mark D Powell --
Dear Mark,
Thanks for the excellent advice. I found that the error is coming because of frequent commits, which is item 2 as you rightly mentioned.
I think I need to keep a watch on the queries running. I was just trying to find the execution time for the queries, in case there is any way to find the query duration without running a trace.
regards
abhishek -
How to get Query response Time?
I am on BI 7.0. I ran some queries using the RSRT transaction. I want to find out how much time the queries took.
I went to
st03 -> expert mode -> BI system load-> select today / week/month according to the query runtime day
I do not see any InfoProviders. The query was on a cube, so why are there no InfoProviders?
Does something have to be turned on for the InfoProvider to show?
When I look in RSDDSTAT_OLAP table, I do see many rows but cannot make any sense. Is there some documentation on how to get total query time from this table?
Is there any other way to get query response time?
Thanks a lot.

Hi,
Why not use RSRT? You can add the database statistics option in "Execute & Debug" and get all the runtime metrics of your query.
In transaction RSRT, enter the query name and press 'Execute + Debug'.
Select 'Display Statistics Data'.
After executing, the query will return a list of the measured metrics.
The event id / text describes the steps (duration in seconds):
"OLAP: Read data" gives the SQL statement's response time (OK: because the SAP
application server acts as an Oracle client, a little network traffic from the DB server is included,
but as long as you are not transferring zillions of rows it can be ignored).
But it gives you much more (e.g. whether the OLAP cache gets used or not)...
In the "Aggregate statistics" you get all the InfoProviders involved in that query.
bye
yk -
How to know exact query execution time.
Hi,
I want to know the exact query execution time, excluding network access and result construction. Please suggest a way to measure this.
Thanks in advance,
Satish.G

Not sure I know what you really want, but if this is a testing phase sort of thing and you are running on unix, there is the time command.
If you want something that is part of your application, you can run a timer in a separate Thread - for approximate time.
If you want more precise time, you can use java.util.Date: it has a method getTime(), which you can call before processing and again after processing, and subtract the former from the latter.
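If the query runs against Oracle, the in-database time (which excludes network transfer and client-side result handling) can also be read from V$SQL, assuming you have privileges on that view. ELAPSED_TIME and CPU_TIME are cumulative microseconds across all executions of the statement; the comment tag is a hypothetical marker to make the statement easy to find:

```sql
-- Run the query once as e.g.:  SELECT /* MY_TAG */ ... ;
-- then look up its cumulative in-database timings:
SELECT sql_id,
       executions,
       elapsed_time / 1000000 AS total_elapsed_sec,
       cpu_time     / 1000000 AS total_cpu_sec
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* MY_TAG */%';
```

Dividing the totals by EXECUTIONS gives a per-execution average that is free of client and network overhead.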
Conversion of Mysql query in oracle acceptable query format
Hi
I have successfully converted my MySQL database to Oracle. Now the problem is how to execute the hundreds of already-written MySQL queries on Oracle. There are many syntax variations in the MySQL query format that are not acceptable to Oracle.
For Example
Select case_id as 'this is alias' from cases
The above query can run on a MySQL database but fails on Oracle, because the single quotes around the alias should be replaced with double quotes before executing it on Oracle. There are also many other syntax conflicts.
I have tried to resolve the problem through SwisSQLAPI, but the problem still exists, as SwisSQLAPI does not deal with all syntax conflicts. In my case, (select if(expression, true, false)) must be replaced with Oracle's decode(expression, value, true, false) function, and this conversion is not supported by SwisSQLAPI.
Please help me in resolving this problem.
Thanks

The problem with trying to port from one language (MySQL SQL) to another (Oracle SQL) is that there are generally no hard rules for a computer to follow such that it will get it 100% correct.
Look at babelfish when you translate a foreign language to English. The end result is readable (usually), but it's rarely completely correct.
The problem is when you feed something into Oracle SQL, it needs to be 100% correct.
All you can really do here is rewrite these queries. It shouldn't actually take as long as you think, because 50% of queries will generally need very minor changes you can do in a minute, and 25% won't need any changes at all. -
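Two of the conflicts mentioned above, shown with their Oracle rewrites (case_id and cases come from the example; qty is a hypothetical column):

```sql
-- MySQL:  SELECT case_id AS 'this is alias' FROM cases;
-- Oracle uses double quotes for quoted aliases/identifiers:
SELECT case_id AS "this is alias" FROM cases;

-- MySQL:  SELECT IF(qty > 0, 'yes', 'no') FROM cases;
-- DECODE only tests equality, so CASE is the safer general rewrite:
SELECT CASE WHEN qty > 0 THEN 'yes' ELSE 'no' END FROM cases;
```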
I have had an iPhone 3, 4, 4S, and now a 5. Every picture I have ever taken is saved as "photo". How do I change the auto-naming format to a more sequential name?
There is no way to change this.