Why is execution of the jar taking longer?
Hi,
I have signed applet code on JDK 1.1, and for the same code base the jar file is larger than the cab file, because of the Manifest files inserted into the jar.
Now I am executing the same code using both the JRE (jar file) and the MVM (cab file), but I am observing a time difference between the two executions.
Using JRE 1.5.0_07, the time comes out to be approximately 2 times the time taken by execution under the MVM.
Can anyone tell me the reason why execution of the jar is taking longer than that of the cab?
Thanks in advance.
Nitin Chaudhary
Please do not post the same question in 7 different forums.
Make all replies here:
http://forum.java.sun.com/thread.jspa?threadID=746635&messageID=4273054#4273054
Similar Messages
-
Why is my Adobe Edge animation taking too much time to load?
Hi
Why is my Adobe Edge animation taking too much time to load?
Please guide me.
Hi, Rohan-
Can you upload your file for us to take a look at? It's hard for us to tell what's going on without taking a look.
Thanks,
-Elaine -
Why is an Oracle Text index column taking a long time?
Why is an Oracle Text index column taking a long time to return results? I created a text index on a column; if I run the query on a single table the result is very fast, but if I join the table with another table (10 records only)
it takes a long time, even though the explain plan shows it is searching by index only.
I created this index for searching a varchar2 column; the data is comma-separated values like (UK,US,IT,BR) and the table has 20 lakh (2 million) records. Normally if I query with the LIKE operator
(like '%US%') it does a full table scan because I am using '%' on both sides. Please help me with how to search this data in less time. Here is my sample code and explain plan.
SQL*Plus: Release 9.2.0.1.0 - Production on Wed Jan 28 16:54:22 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
SQL> set timing on
SQL> set linesize 180
SQL> explain plan for SELECT T.esongid FROM (SELECT A.ESONGID FROM wcmedeco.EDECO_ESONGS_TERR_CTRY
A WHERE CONTAINS(A.TERR_CTRY_NAMES,'US')>0
2 GROUP BY A.ESONGID)K,T
3 WHERE K.ESONGID=T.ESONGID;
Explained.
Elapsed: 00:00:00.01
SQL> select *from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 1 | 26 | 4 |
| 1 | NESTED LOOPS | | 1 | 26 | 4 |
| 2 | VIEW | | 1 | 13 | 4 |
| 3 | SORT GROUP BY | | 1 | 89 | 4 |
| 4 | TABLE ACCESS BY INDEX ROWID| EDECO_ESONGS_TERR_CTRY | 1 | 89 | 2 |
| 5 | DOMAIN INDEX | IDX_TERR_CTRY_NAMES | | | 0 |
| 6 | INDEX RANGE SCAN | IDX_ESONGID_T | 1 | 13 | 1 |
PLAN_TABLE_OUTPUT
Note: cpu costing is off, 'PLAN_TABLE' is old version
14 rows selected.
Elapsed: 00:00:00.00
SQL>
Regards,
Rajasekhar
You have not formatted your code properly so we cannot see the query you're executing. Please put some line breaks in.
Secondly, how fresh are the statistics on those tables? Are you really returning one record out of twenty million?
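For reference, a minimal sketch of checking and refreshing those statistics (schema and table names taken from the post above):
-- check when the table was last analyzed and the recorded row count
SELECT owner, table_name, num_rows, last_analyzed
FROM   all_tables
WHERE  owner = 'WCMEDECO'
AND    table_name = 'EDECO_ESONGS_TERR_CTRY';
-- regather if stale (cascade also refreshes index statistics)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'WCMEDECO',
    tabname => 'EDECO_ESONGS_TERR_CTRY',
    cascade => TRUE);
END;
/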
Cheers, APC
blog: http://radiofreetooting.blogspot.com -
Why is process chain activation taking more time?
Hi BW Gurus,
I am going to activate the process chain, but it takes a long time to activate. What may be the reasons?
Regards,
Raju
Hi,
Process chain activation becomes slow for the following reasons:
1) A large number of entries in the RSPCCHAIN table, i.e. there are a lot of process chains, or very large process chains.
2) There is a missing index on the RSPCCHAIN table.
The solution to this problem is to create an index on the following fields of the RSPCCHAIN table (see the sketch after the field list):
EVENT_GREEN
EVENT_PGREEN
EVENT_RED
EVENT_PRED
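At the database level the index would look roughly like this (a sketch only; the index name is made up, and in a real SAP system you would create the index through the ABAP Dictionary, transaction SE11, as per the notes below):
CREATE INDEX "RSPCCHAIN~Z01" ON RSPCCHAIN
  (EVENT_GREEN, EVENT_PGREEN, EVENT_RED, EVENT_PRED);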
If you want further clarification on this, refer to the following SAP notes:
906512 - Activating a process chain is very slow.
Related notes: 872275, 872273 & 872271.
Regards,
Chendrasekhar -
We are running a report and it is taking a long time to execute. What steps
can we take to reduce the execution time?
Hi ,
the performance can be improved in many ways..
First, try to select based on the key fields if it is a very large table.
If not, then create a secondary index for the selection.
Don't perform SELECTs inside a loop; instead use FOR ALL ENTRIES IN.
Try to perform many operations in one loop rather than running different loops over the same internal table.
All these steps and many more can be implemented to improve the performance (the loop point is illustrated below).
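An illustrative sketch of the loop point in plain SQL terms (the table names here are made up; in ABAP the second form corresponds to FOR ALL ENTRIES or a join):
-- row-by-row: one query per entry of the driving set (slow)
SELECT d.* FROM detail_tab d WHERE d.doc_key = :current_key;  -- repeated N times
-- set-based: one statement for the whole driving set (fast)
SELECT d.*
FROM   detail_tab d
JOIN   driving_tab s ON s.doc_key = d.doc_key;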
We would need to look into your code to see how it can be improved in your case...
Regards,
Vivek Shah -
VB macro in BEx Analyzer is taking a long time to complete execution
Hi Experts,
In an FI query, we have a VB macro which updates the Excel sheets by taking values from the previous Excel sheets.
The issue is that query execution takes a long time, but if we keep pressing the ENTER key the query runs very fast and gives the results; that is not a correct way to do it, though.
It is a critical issue in Production; can anyone help me resolve it?
Regards,
Anju
Hi Anju,
I need to create a VB macro in one of my queries, but I am not familiar with how to plug the VB code into the query.
Can you give me some basic procedure on how to do that?
What I need to do exactly is to bring a KF from one query into another query. Not sure if this is possible using VB.
Let me know.
Thanks.
Cesar -
Suddenly ODI scheduled executions taking more time than usual.
Hi,
I have set ODI packages scheduled for execution.
For some days they have been taking more time to execute.
Before, they used to take 1 hr 30 mins approx.
Now they are taking 3 hrs - 3 hrs 15 mins approx.
And there is no major change in the data in terms of quantity.
My ODI version is
Standalone Edition Version 11.1.1
Build ODI_11.1.1.3.0_GENERIC_100623.1635
ODI packages are mainly using Oracle as SOURCE and TARGET DB.
What should I check to find the reasons for this sudden increase in execution time?
Any pointers regarding this would be appreciated.
Thanks,
Mahesh
Mahesh,
Use some repository queries to retrieve the session task timings and compare your slow execution to a previous acceptable execution, then look for the biggest changes - this will highlight where you are slowing down; then it's off to tune the item accordingly.
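As a starting point, one such repository query might look like this (a sketch; the SNP_ table and column names are assumptions to verify against your work repository version):
-- compare task durations between a slow session and a known-good one
SELECT s.sess_no,
       s.sess_name,
       t.task_beg,
       t.task_end,
       t.task_dur,          -- duration in seconds
       t.nb_row             -- rows processed
FROM   snp_session s
JOIN   snp_sess_task_log t ON t.sess_no = s.sess_no
WHERE  s.sess_no IN (:slow_sess_no, :good_sess_no)
ORDER  BY s.sess_no, t.task_beg;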
See here for some example reports; you might need to tweak them for your current repository version, but I don't think the table structures have changed that much:
http://rnm1978.wordpress.com/2010/11/03/analysing-odi-batch-performance/ -
Procedure is taking a long time to execute
Hi,
When I try to execute the procedure below, it takes a long time to
execute.
Can you please suggest possible ways to tune the query?
PROCEDURE sp_sel_cntr_ri_fact (
po_cntr_ri_fact_cursor OUT t_cursor
)
IS
BEGIN
OPEN po_cntr_ri_fact_cursor FOR
SELECT c_RI_fAt_id, c_RI_fAt_code, c_RI_fAt_nme,
case when exists (SELECT 'x' FROM A_CRF_PARAM_CALIB t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM A_EMPI_ERV_CALIB_DETAIL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM A_IC_CNTRY_IC_CRF_MPG_DTL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM A_IC_CRF_CNTRYIDX_MPG_DTL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM A_IC_CRF_RESI_COR t WHERE t.x_axis_c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM A_IC_CRF_RESI_COR t WHERE t.y_axis_c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM A_PAR_MARO_GAMMA_PRIME_CALIB t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM D_ANALYSIS_FAT t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM D_CALIB_CNTRY_RI_FATOR t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM E_BUSI_PORT_DTL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM E_CNTRY_LOSS_DIST_RSLT t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM E_CNTRY_LOSS_RSLT t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM E_CRF_BUS_PORTFOL_CRITERIA t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM E_CRF_CORR_RSLT t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
when exists (SELECT 'x' FROM E_HYPO_PORTF_DTL t WHERE t.c_RI_fAt_id = A_IC_CNTR_RI_FAT.c_RI_fAt_id)
then 'Yes'
else
'No'
end used_analysis_ind,
creation_date, datetime_stamp, user_id
FROM A_IC_CNTR_RI_FAT
ORDER BY c_RI_fAt_id_nme DESC;
END sp_sel_cntr_ri_fact;
[When your query takes too long...|http://forums.oracle.com/forums/thread.jspa?messageID=1812597]
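One structural observation beyond that link: the fifteen chained EXISTS branches are probed per row of A_IC_CNTR_RI_FAT. A common rewrite is a single EXISTS over a UNION ALL (a sketch only, shown for the first two child tables, with A_IC_CNTR_RI_FAT aliased as f; whether it helps depends on each child table having an index on its c_RI_fAt_id column):
case when exists (
       select 1 from A_CRF_PARAM_CALIB t
       where  t.c_RI_fAt_id = f.c_RI_fAt_id
       union all
       select 1 from A_EMPI_ERV_CALIB_DETAIL t
       where  t.c_RI_fAt_id = f.c_RI_fAt_id
       -- ... remaining thirteen child tables ...
     ) then 'Yes' else 'No'
end used_analysis_ind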
-
SSRS reports taking a long time to load
Hello,
Problem: SSRS reports taking a long time to load
My System environment : Visual Studio 2008 SP1 and SQL Server 2008 R2
Production Environment : Visual Studio 2008 SP1 and SQL Server 2008 R2
I have created a parameterized report (6 parameters); it fetches data from 1 table. The table has 1 year and 6 months of data, and I am selecting parameters for only 1 month (about 2500 records). It takes almost 2 minutes and 30 seconds
to load the report.
This report runs efficiently on my system (the report loads in only 5 to 6 seconds), but in
production it takes 2 minutes 30 seconds.
I checked the Execution log in production and found the following timings (approx):
First run: Data retrieval ~10 sec, Processing ~15 sec, Rendering ~2 min 5 sec.
But the confusing point is that if I run the same report at a different time, the overall output time is the same (approx 2 min 30 sec), but:
Second run: Data retrieval more than 1 min, Processing ~15 sec, Rendering more than 1 min.
So one question: why are the timings different?
My doubts are:
1) If the query (procedure to retrieve the data) is the problem, then it should always take more time.
2) If the report structure is the problem, then rendering should also always take the same (long) time.
For this 2nd point I read on a blog that rendering depends on the environment, e.g. network bandwidth, RAM, CPU usage, and the number of users accessing the same report at a time.
So I tested the report when no other user was working on any report, but that failed too (same result: output in 2 min 30 sec).
From the network team I got the result that there is no issue or overload in CPU usage or RAM, and also no issue in network bandwidth.
The production database server and report server are different (but on the same network).
I checked that on the database server SQL Server is using almost the full RAM (23 GB out of 24 GB).
I tried to reduce the allocated memory down to 2 GB (a trial solution I got from blogs) but this one also failed.
One hint I got from a colleague is to change the allocated memory setting for SQL Server from static to dynamic
(I guess the above point is the same), but I could not find that static/dynamic memory setting option.
I did the steps below:
Connected to the SQL Server instance.
Right-clicked on the instance, went to Properties, then to the Memory tab.
I found three options: 1) Server Memory 2) Other memory 3) Section for "Configured values and Running values".
Then I tried to reduce Maximum Server Memory to 2 GB (as mentioned above).
All trials failed, and I could not find the root cause of this issue.
Can anyone please help (it's a bit urgent)?
Hi UdayKGR,
According to your description, your report takes too long to load in your production environment, right?
In this scenario, since the report runs quickly in the development environment, we initially suspected an issue with data retrieval. However, based on the information in the execution log, the longest time is spent on the rendering part. So we suggest you optimize
the report itself to reduce the time for rendering. Please refer to the link below:
My report takes too long to render
Here is another article about overall performance optimization for Reporting Services:
Reporting Services Performance and Optimization
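For tracking where the time goes on each run, the component timings can be pulled straight from the report server catalog (a sketch; this assumes the default ReportServer database name and the ExecutionLog3 view available in SQL Server 2008 R2):
SELECT TOP 50
       ItemPath,
       TimeStart,
       TimeDataRetrieval,   -- milliseconds
       TimeProcessing,      -- milliseconds
       TimeRendering        -- milliseconds
FROM   ReportServer.dbo.ExecutionLog3
ORDER  BY TimeStart DESC;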
If you have any question, please feel free to ask.
Best Regards,
Simon Hou -
Query in TimesTen taking more time than query in Oracle database
Hi,
Can anyone please explain why a query in TimesTen is taking more time
than the same query in the Oracle database?
Below are my settings and what I have done,
step by step:
1. This is the table I created in the Oracle database
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0)...
CREATE TABLE student (
id NUMBER(9) PRIMARY KEY,
first_name VARCHAR2(10),
last_name VARCHAR2(10)
);
2.THIS IS THE ANONYMOUS BLOCK I USE TO
POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
declare
firstname varchar2(12);
lastname varchar2(12);
catt number(9);
begin
for cntr in 1..2599999 loop
firstname:=(cntr+8)||'f';
lastname:=(cntr+2)||'l';
if cntr like '%9999' then
dbms_output.put_line(cntr);
end if;
insert into student values(cntr,firstname, lastname);
end loop;
end;
/
3. MY DSN IS SET THE FOLLOWING WAY..
DATA STORE PATH- G:\dipesh3repo\db
LOG DIRECTORY- G:\dipesh3repo\log
PERM DATA SIZE-1000
TEMP DATA SIZE-1000
MY TIMESTEN VERSION-
C:\Documents and Settings\dipesh>ttversion
TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
Instance admin: dipesh
Instance home directory: G:\TimestTen\TT70_32
Daemon home directory: G:\TimestTen\TT70_32\srv\info
THEN I CONNECT TO THE TIMESTEN DATABASE
C:\Documents and Settings\dipesh> ttisql
command>connect "dsn=dipesh3;oraclepwd=tiger";
4. THEN I START THE AGENT
call ttCacheUidPwdSet('SCOTT','TIGER');
Command> CALL ttCacheStart();
5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
create readonly cache group rc_student autorefresh
interval 5 seconds from student
(id int not null primary key, first_name varchar2(10), last_name varchar2(10));
load cache group rc_student commit every 100 rows;
6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
I SET THE TIMING..
command>TIMING 1;
consider this query now..
Command> select * from student where first_name='2155666f';
< 2155658, 2155666f, 2155660l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
another query-
Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
2206: Table SCOTT.STUDENTS not found
Execution time (SQLPrepare) = 0.074964 seconds.
The command failed.
Command> SELECT * FROM STUDENT where first_name='2093434f';
< 2093426, 2093434f, 2093428l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
Command>
7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
ID FIRST_NAME LAST_NAME
1498663 1498671f 1498665l
Elapsed: 00:00:00.15
Can anyone please explain why the query in TimesTen is taking more time
than the query in the Oracle database?
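One likely factor (an assumption on my part: nothing in the steps above creates an index on first_name, so the TimesTen query scans all 2.6 million cached rows, while Oracle may be serving the scan from a warm buffer cache). A sketch of the obvious first experiment, with a made-up index name:
Command> CREATE INDEX student_fn_ix ON student (first_name);
Command> SELECT * FROM student WHERE first_name = '2155666f';
After creating the index, the SELECT should locate the row without a full scan.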
Message was edited by: Dipesh Majumdar
Message was edited by: user542575
TimesTen
Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
Version: 7.0.4.0.0 64 bit
Schema:
create usermanaged cache group factCache from
MV_US_DATAMART (
ORDER_DATE DATE,
IF_SYSTEM VARCHAR2(32) NOT NULL,
GROUPING_ID TT_BIGINT,
TIME_DIM_ID TT_INTEGER NOT NULL,
BUSINESS_DIM_ID TT_INTEGER NOT NULL,
ACCOUNT_DIM_ID TT_INTEGER NOT NULL,
ORDERTYPE_DIM_ID TT_INTEGER NOT NULL,
INSTR_DIM_ID TT_INTEGER NOT NULL,
EXECUTION_DIM_ID TT_INTEGER NOT NULL,
EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
NO_ORDERS TT_BIGINT,
FILLED_QUANTITY TT_BIGINT,
CNT_FILLED_QUANTITY TT_BIGINT,
QUANTITY TT_BIGINT,
CNT_QUANTITY TT_BIGINT,
COMMISSION BINARY_FLOAT,
CNT_COMMISSION TT_BIGINT,
FILLS_NUMBER TT_BIGINT,
CNT_FILLS_NUMBER TT_BIGINT,
AGGRESSIVE_FILLS TT_BIGINT,
CNT_AGGRESSIVE_FILLS TT_BIGINT,
NOTIONAL BINARY_FLOAT,
CNT_NOTIONAL TT_BIGINT,
TOTAL_PRICE BINARY_FLOAT,
CNT_TOTAL_PRICE TT_BIGINT,
CANCELLED_ORDERS_COUNT TT_BIGINT,
CNT_CANCELLED_ORDERS_COUNT TT_BIGINT,
ROUTED_ORDERS_NO TT_BIGINT,
CNT_ROUTED_ORDERS_NO TT_BIGINT,
ROUTED_LIQUIDITY_QTY TT_BIGINT,
CNT_ROUTED_LIQUIDITY_QTY TT_BIGINT,
REMOVED_LIQUIDITY_QTY TT_BIGINT,
CNT_REMOVED_LIQUIDITY_QTY TT_BIGINT,
ADDED_LIQUIDITY_QTY TT_BIGINT,
CNT_ADDED_LIQUIDITY_QTY TT_BIGINT,
AGENT_CHARGES BINARY_FLOAT,
CNT_AGENT_CHARGES TT_BIGINT,
CLEARING_CHARGES BINARY_FLOAT,
CNT_CLEARING_CHARGES TT_BIGINT,
EXECUTION_CHARGES BINARY_FLOAT,
CNT_EXECUTION_CHARGES TT_BIGINT,
TRANSACTION_CHARGES BINARY_FLOAT,
CNT_TRANSACTION_CHARGES TT_BIGINT,
ORDER_MANAGEMENT BINARY_FLOAT,
CNT_ORDER_MANAGEMENT TT_BIGINT,
SETTLEMENT_CHARGES BINARY_FLOAT,
CNT_SETTLEMENT_CHARGES TT_BIGINT,
RECOVERED_AGENT BINARY_FLOAT,
CNT_RECOVERED_AGENT TT_BIGINT,
RECOVERED_CLEARING BINARY_FLOAT,
CNT_RECOVERED_CLEARING TT_BIGINT,
RECOVERED_EXECUTION BINARY_FLOAT,
CNT_RECOVERED_EXECUTION TT_BIGINT,
RECOVERED_TRANSACTION BINARY_FLOAT,
CNT_RECOVERED_TRANSACTION TT_BIGINT,
RECOVERED_ORD_MGT BINARY_FLOAT,
CNT_RECOVERED_ORD_MGT TT_BIGINT,
RECOVERED_SETTLEMENT BINARY_FLOAT,
CNT_RECOVERED_SETTLEMENT TT_BIGINT,
CLIENT_AGENT BINARY_FLOAT,
CNT_CLIENT_AGENT TT_BIGINT,
CLIENT_ORDER_MGT BINARY_FLOAT,
CNT_CLIENT_ORDER_MGT TT_BIGINT,
CLIENT_EXEC BINARY_FLOAT,
CNT_CLIENT_EXEC TT_BIGINT,
CLIENT_TRANS BINARY_FLOAT,
CNT_CLIENT_TRANS TT_BIGINT,
CLIENT_CLEARING BINARY_FLOAT,
CNT_CLIENT_CLEARING TT_BIGINT,
CLIENT_SETTLE BINARY_FLOAT,
CNT_CLIENT_SETTLE TT_BIGINT,
CHARGEABLE_TAXES BINARY_FLOAT,
CNT_CHARGEABLE_TAXES TT_BIGINT,
VENDOR_CHARGE BINARY_FLOAT,
CNT_VENDOR_CHARGE TT_BIGINT,
ROUTING_CHARGES BINARY_FLOAT,
CNT_ROUTING_CHARGES TT_BIGINT,
RECOVERED_ROUTING BINARY_FLOAT,
CNT_RECOVERED_ROUTING TT_BIGINT,
CLIENT_ROUTING BINARY_FLOAT,
CNT_CLIENT_ROUTING TT_BIGINT,
TICKET_CHARGES BINARY_FLOAT,
CNT_TICKET_CHARGES TT_BIGINT,
RECOVERED_TICKET_CHARGES BINARY_FLOAT,
CNT_RECOVERED_TICKET_CHARGES TT_BIGINT,
PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
READONLY);
No of rows: 2228558
Config:
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
< ConnectionCharacterSet, US7ASCII >
< ConnectionName, tt_us_dma >
< Connections, 64 >
< DataBaseCharacterSet, AL32UTF8 >
< DataStore, e:\andrew\datacache\usDMA >
< DurableCommits, 0 >
< GroupRestrict, <NULL> >
< LockLevel, 0 >
< LockWait, 10 >
< LogBuffSize, 65536 >
< LogDir, e:\andrew\datacache\ >
< LogFileSize, 64 >
< LogFlushMethod, 1 >
< LogPurge, 0 >
< Logging, 1 >
< MemoryLock, 0 >
< NLS_LENGTH_SEMANTICS, BYTE >
< NLS_NCHAR_CONV_EXCP, 0 >
< NLS_SORT, BINARY >
< OracleID, NYCATP1 >
< PassThrough, 0 >
< PermSize, 4000 >
< PermWarnThreshold, 90 >
< PrivateCommands, 0 >
< Preallocate, 0 >
< QueryThreshold, 0 >
< RACCallback, 0 >
< SQLQueryTimeout, 0 >
< TempSize, 514 >
< TempWarnThreshold, 90 >
< Temporary, 1 >
< TransparentLoad, 0 >
< TypeMode, 0 >
< UID, OS_OWNER >
ORACLE:
Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
Schema:
CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
TABLESPACE TS_OS
PARTITION BY RANGE (ORDER_DATE)
PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
LOGGING
NOCOMPRESS
TABLESPACE TS_OS
NOCACHE
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
USING INDEX
TABLESPACE TS_OS_INDEX
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT order_date, if_system,
GROUPING_ID (order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id
) GROUPING_ID,
/* ============ DIMENSIONS ============ */
time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
instr_dim_id, execution_dim_id, exec_exchange_dim_id,
/* ============ MEASURES ============ */
-- o.FX_RATE /* FX_RATE */,
COUNT (*) no_orders,
-- SUM(NO_ORDERS) NO_ORDERS,
-- COUNT(NO_ORDERS) CNT_NO_ORDERS,
SUM (filled_quantity) filled_quantity,
COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
COUNT (quantity) cnt_quantity, SUM (commission) commission,
COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
COUNT (fills_number) cnt_fills_number,
SUM (aggressive_fills) aggressive_fills,
COUNT (aggressive_fills) cnt_aggressive_fills,
SUM (fx_rate * filled_quantity * average_price) notional,
COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
SUM (fx_rate * fills_number * average_price) total_price,
COUNT (fx_rate * fills_number * average_price) cnt_total_price,
SUM (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END) cancelled_orders_count,
COUNT (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END
) cnt_cancelled_orders_count,
-- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
-- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
-- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
SUM (routed_orders_no) routed_orders_no,
COUNT (routed_orders_no) cnt_routed_orders_no,
SUM (routed_liquidity_qty) routed_liquidity_qty,
COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
SUM (removed_liquidity_qty) removed_liquidity_qty,
COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
SUM (added_liquidity_qty) added_liquidity_qty,
COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
SUM (agent_charges) agent_charges,
COUNT (agent_charges) cnt_agent_charges,
SUM (clearing_charges) clearing_charges,
COUNT (clearing_charges) cnt_clearing_charges,
SUM (execution_charges) execution_charges,
COUNT (execution_charges) cnt_execution_charges,
SUM (transaction_charges) transaction_charges,
COUNT (transaction_charges) cnt_transaction_charges,
SUM (order_management) order_management,
COUNT (order_management) cnt_order_management,
SUM (settlement_charges) settlement_charges,
COUNT (settlement_charges) cnt_settlement_charges,
SUM (recovered_agent) recovered_agent,
COUNT (recovered_agent) cnt_recovered_agent,
SUM (recovered_clearing) recovered_clearing,
COUNT (recovered_clearing) cnt_recovered_clearing,
SUM (recovered_execution) recovered_execution,
COUNT (recovered_execution) cnt_recovered_execution,
SUM (recovered_transaction) recovered_transaction,
COUNT (recovered_transaction) cnt_recovered_transaction,
SUM (recovered_ord_mgt) recovered_ord_mgt,
COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
SUM (recovered_settlement) recovered_settlement,
COUNT (recovered_settlement) cnt_recovered_settlement,
SUM (client_agent) client_agent,
COUNT (client_agent) cnt_client_agent,
SUM (client_order_mgt) client_order_mgt,
COUNT (client_order_mgt) cnt_client_order_mgt,
SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
SUM (client_trans) client_trans,
COUNT (client_trans) cnt_client_trans,
SUM (client_clearing) client_clearing,
COUNT (client_clearing) cnt_client_clearing,
SUM (client_settle) client_settle,
COUNT (client_settle) cnt_client_settle,
SUM (chargeable_taxes) chargeable_taxes,
COUNT (chargeable_taxes) cnt_chargeable_taxes,
SUM (vendor_charge) vendor_charge,
COUNT (vendor_charge) cnt_vendor_charge,
SUM (routing_charges) routing_charges,
COUNT (routing_charges) cnt_routing_charges,
SUM (recovered_routing) recovered_routing,
COUNT (recovered_routing) cnt_recovered_routing,
SUM (client_routing) client_routing,
COUNT (client_routing) cnt_client_routing,
SUM (ticket_charges) ticket_charges,
COUNT (ticket_charges) cnt_ticket_charges,
SUM (recovered_ticket_charges) recovered_ticket_charges,
COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
FROM us_datamart_raw
GROUP BY order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id;
-- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
-- by Oracle with the associated materialized view.
CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
NOLOGGING
NOPARALLEL
COMPRESS 7;
No of rows: 2228558
The query (taken from Mondrian) I run against each of them is:
select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
--, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
--, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
--, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
--, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
--, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
--, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
--, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
--, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
--, sum("MV_US_DATAMART"."COMMISSION") as "m9"
--, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
--, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
--,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
--,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
--, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
--, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
--, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
--, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
--,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
--, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
(I uncomment one column at a time and rerun.) I improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results (in seconds) were:
No Columns ORACLE TimesTen
1 1.05 0.94
2 1.07 1.47
3 2.04 1.8
4 2.06 2.08
5 2.09 2.4
6 3.01 2.67
7 4.02 3.06
8 4.03 3.37
9 4.04 3.62
10 4.06 4.02
11 4.08 4.31
12 4.09 4.61
13 5.01 4.76
14 5.02 5.06
15 5.04 5.25
16 5.05 5.48
17 5.08 5.84
18 6 6.21
19 6.02 6.34
20 6.04 6.75 -
DB shutdown is taking a long time
Friends, for the last few days we have been observing that it takes at least 20 minutes to shut down the db using 'shutdown immediate', though there are no hints about the cause in the alert log files. A large number of user sessions and local and remote connections may be the reason. Can anyone provide me a common script that will terminate all user sessions and all connections to the db before the shutdown, except the current session? Any help will be highly appreciated as always.
918868 wrote:
Thanks for your suggestions. Can you provide me any such script that will terminate all user session and all connections to the db before the shutdown except the current session?
The version of DB is 11g R1 ie . 11.2.0.1.0
Edited by: 918868 on Feb 22, 2013 8:37 PM
Such a script would be essentially doing what the shutdown immediate is doing - closing sessions and rolling back in-flight transaction, cleaning up orphans. I wouldn't expect such a script to do that any faster than the shutdown command does.
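For completeness, the kind of script being asked for looks roughly like this (a sketch; run as a DBA, and note it only duplicates the work shutdown immediate already does):
BEGIN
  FOR s IN (SELECT sid, serial#
            FROM   v$session
            WHERE  type = 'USER'
            AND    sid <> SYS_CONTEXT('USERENV', 'SID')) LOOP
    BEGIN
      EXECUTE IMMEDIATE 'ALTER SYSTEM KILL SESSION '''
                        || s.sid || ',' || s.serial# || ''' IMMEDIATE';
    EXCEPTION
      WHEN OTHERS THEN NULL;  -- session already gone or marked for kill
    END;
  END LOOP;
END;
/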
If this is on *nix, you could start another terminal session and do a 'tail -f' on the alert log and see where it is taking its time. If it's Windows, you'd need to wait until the shutdown is complete, then examine the alert log for clues. One more reason why *nix is far superior to Windblows.
Edited by: EdStevens on Feb 23, 2013 8:04 AM -
Adding a column is taking too much time. How to avoid this?
ALTER TABLE CONTACT_DETAIL
ADD (ISIMDSCONTACT_F NUMBER(1) DEFAULT 0 NOT NULL
,ISREACHCONTACT_F NUMBER(1) DEFAULT 0 NOT NULL
);
Is there any way to speed up the execution of this statement?
It has been running for more than 24 hours since it was started.
I do not know why it is taking so much time.
The size of the table is 30 MB.
To add a column, the row directory of every record must be rewritten.
Obviously this will take time and produce redo.
Whenever something is slow, the first question you need to answer is
'What is it waiting for?' You can find out by investigating various v$ views.
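For example (a sketch; :sid is the session running the ALTER TABLE, and on 10g and later the same columns also appear on v$session itself):
SELECT sid, event, state, wait_time, seconds_in_wait
FROM   v$session_wait
WHERE  sid = :sid;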
Also, after more than 200 'I cannot be bothered to do any research on my own' questions, you should know you don't post here without including a four-digit version number and a platform,
as volunteers aren't mind readers.
If you want to continue to withhold information, please consider NOT posting here.
Sybrand Bakker
Senior Oracle DBA
Experts: those who did read the documentation and can be bothered to investigate their own problems. -
Create index is taking more time
Hi,
One of the concurrent programs is taking more time. We generated the trace file and found that the create index is taking most of the time.
Below is from the trace file; this type of index creation happens many times in the Oracle standard program.
Can somebody let me know why there is a big difference between cpu and elapsed time?
We are seeing the PX Deq: Execute Reply event as well, which looks like idle time for the database.
Please let me know which database parameter is affecting this.
CREATE INDEX ITEM_CATEGORIES_N2_BD9 ON ITEM_CATEGORIES_BD9(CATEGORY_SET_ID,
SR_CATEGORY_ID,ORGANIZATION_ID,SR_INSTANCE_ID) PARALLEL TABLESPACE MSCX
STORAGE( INITIAL 40960 NEXT 33554432 PCTINCREASE 0) PCTFREE 10 INITRANS 11
MAXTRANS 255
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 3 0 0
Execute 1 0.35 364.82 131168 117945 60324 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.35 364.83 131168 117948 60324 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 80 (recursive depth: 2)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
reliable message 1 0.00 0.00
enq: KO - fast object checkpoint 1 0.01 0.01
PX Deq: Join ACK 6 0.00 0.00
PX Deq Credit: send blkd 112 0.00 0.01
PX qref latch 7 0.00 0.00
PX Deq: Parse Reply 3 0.00 0.00
PX Deq: Execute Reply 604 1.96 364.42
log file sync 1 0.00 0.00
PX Deq: Signal ACK 1 0.00 0.00
latch: session allocation 2 0.00 0.00
Regards,
user12121524 wrote:
[quotes the CREATE INDEX statement and trace output above]
What you've given us is the query co-ordinator trace, which basically tells us that the coordinator waited 364 seconds for the PX slaves to tell it that they had completed their tasks ("PX Deq: Execute Reply" time). You need to look at the slave traces to find out where they spent their time - and that's probably not going to be easy if there are lots of parallel pieces of processing going on.
If you want to do some debugging (in general), one option is to add a query against v$pq_tqstat after each piece of parallel processing and log the results to a named file, or write them to a table with a tag, as this will tell you how many slaves were involved, how, and what the distribution of work and time was.
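A minimal form of that query (a sketch; note v$pq_tqstat is only populated in the session that ran the parallel statement, for its most recent parallel execution):
SELECT dfo_number, tq_id, server_type, process, num_rows, bytes, waits
FROM   v$pq_tqstat
ORDER  BY dfo_number, tq_id, server_type, process;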
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
SELECT statement suddenly taking much longer than usual
Hi All,
I am trying to run one SELECT statement which uses 6 tables. That query generally takes 25-30 minutes to generate output.
Today it has been running for more than 2 hours. I have checked that there are no locks on those tables and no other process is using them.
What else should I check in order to figure out why my SELECT statement is taking so long?
Any help will be much appreciated.
Thanks!
>Please let me know if you still want me to provide all the information mentioned in the link.
Yes, please.
Before you can even start optimizing, it should be clear what parts of the query are running slow.
The links contains the steps to take regarding how to identify the things that make the query run slow.
Ideally you post a trace/tkprof report with wait events; it'll show what the time is being spent on, and give an execution plan and a database version all at once...
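Enabling such a trace for your own session is short (a sketch; tkprof is then run on the trace file the session writes):
ALTER SESSION SET tracefile_identifier = 'slow_select';
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
-- run the slow SELECT here
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;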
>Today it is running for more than 2 hours. I have checked there are no locks on those tables and no other process is using them.
Well, something must have changed.
And you must identify what exactly has changed, but it's a broad range you have to check:
- it could be outdated table statistics
- it could be data growth or skewness that makes the optimizer choose a wrong plan all of a sudden
- it could be a table that got modified with some bad index
- it could be ...
So, by posting the information in the link, you'll leave less room for guesses from us, so you'll get an explanation that makes sense faster or, while investigating by following the steps in the link, you'll get the explanation yourself. -
Essbase model taking more time
HI,
I am trying to create an Essbase model in ODI, but it is taking a long time. The status in the Operator, execution tab, shows Running. I have waited more than half the day.
Could anyone help with this?
Regards
KumarK
John,
Could you please tell me the reason why the reverse-engineering process works better when using an agent?
KumarK,
I notice that sometimes when you reverse a model from Essbase to ODI there appears to be a LOCK process on the ODI work repository.
The thing you need to do is stop that job in the Operator by right-clicking on it and choosing stop.
I don't know why this situation happens.
Maybe it is just a bug or something.
Maybe John has a reason why using an agent to do the reverse engineering is better than local (no agent)?
Thanks