Code is taking more time on DB.
Hi,
The following code is taking more time on the DB.
Can any changes be made to this code to improve its performance?
FORM list_name.
  DATA: adr_form TYPE anrex.
  LOOP AT it_p0002 INTO wa_p0002
       WHERE begda LE pn-endda
         AND endda GE pn-begda.
    CLEAR: adr_form, wa_name.
    SELECT SINGLE * FROM t522t          "1
      WHERE anred EQ wa_p0002-anred     "1
        AND sprsl EQ 'EN'.              "1
    IF sy-subrc EQ 0.                   "1
      adr_form = t522t-atext.           "1
    ENDIF.                              "1
    CONCATENATE adr_form wa_p0002-vorna wa_p0002-nachn
      INTO wa_name-cname SEPARATED BY space. "complete name
    wa_name-gbdat = wa_p0002-gbdat.
  ENDLOOP.
  IF sy-subrc NE 0.
    wa_name-cname = text-n00.
    PERFORM error_list USING 'PA0002' text-e02.
  ENDIF.
ENDFORM.
Take the SELECT statement out of the loop and use "FOR ALL ENTRIES" to fill an internal table.
Then work on the internal table inside the loop.
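The advice above is ABAP-specific, but the underlying pattern is general: replace a per-row database lookup with one batched fetch into a hash map, then do in-memory lookups inside the loop. A minimal sketch of the idea in Python (the `db_fetch_titles` function and all field names are invented stand-ins for the T522T access, not real APIs):

```python
# Sketch: hoist a per-row lookup out of the loop.
# "db_fetch_titles" stands in for one batched SELECT (e.g. FOR ALL ENTRIES);
# in the original ABAP this would fill an internal table keyed by anred.

def db_fetch_titles(keys):
    # Pretend database: one round trip for all distinct keys.
    table = {"1": "Mr.", "2": "Mrs."}
    return {k: table[k] for k in keys if k in table}

def build_names(rows):
    # One batched fetch instead of a SELECT SINGLE per iteration.
    titles = db_fetch_titles({r["anred"] for r in rows})
    names = []
    for r in rows:
        title = titles.get(r["anred"], "")  # in-memory lookup, like READ TABLE
        names.append(" ".join(p for p in (title, r["vorna"], r["nachn"]) if p))
    return names

rows = [
    {"anred": "1", "vorna": "John", "nachn": "Doe"},
    {"anred": "2", "vorna": "Jane", "nachn": "Roe"},
    {"anred": "9", "vorna": "Alex", "nachn": "Poe"},  # no matching title
]
print(build_names(rows))  # → ['Mr. John Doe', 'Mrs. Jane Roe', 'Alex Poe']
```

The key point is that the number of database round trips drops from one per loop iteration to one per loop.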
Similar Messages
-
CDP Performance Issue - taking more time to fetch data
Hi,
I'm working on Stellent 7.5.1.
For one of the portlets in the portal, it is taking too long to fetch data. Can someone please help me solve this issue so that performance can be improved? This is my code for fetching data from the server:
public void getManager(final HashMap binderMap)
    throws VistaInvalidInputException, VistaDataNotFoundException,
    DataException, ServiceException, VistaTemplateException {
    String collectionID =
        getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
    long firstStartTime = System.currentTimeMillis();
    HashMap resultSetMap = null;
    String isNonRecursive = getStringLocal(VistaFolderConstants
        .ISNONRECURSIVE_KEY);
    if (isNonRecursive != null
            && isNonRecursive.equalsIgnoreCase(
                VistaContentFetchHelperConstants.STRING_TRUE)) {
        VistaLibraryContentFetchManager libraryContentFetchManager =
            new VistaLibraryContentFetchManager(binderMap);
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        resultSetMap = libraryContentFetchManager
            .getFolderContentItems(m_workspace);
        // used to add the resultset to the binder
        addResultSetToBinder(resultSetMap, true);
    } else {
        long startTime = System.currentTimeMillis();
        // isStandard is used to decide whether the call is for Standard
        // or Extended.
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        String isStandard = getTemplateInformation(binderMap);
        long endTimeTemplate = System.currentTimeMillis();
        binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
        long endTimebinderMap = System.currentTimeMillis();
        VistaContentFetchManager contentFetchManager =
            new VistaContentFetchManager(binderMap);
        long endTimeFetchManager = System.currentTimeMillis();
        resultSetMap = contentFetchManager
            .getAllFolderContentItems(m_workspace);
        long endTimeresultSetMap = System.currentTimeMillis();
        // used to add the resultset and the total no of content items
        // to the binder.
        addResultSetToBinder(resultSetMap, false);
        long endTime = System.currentTimeMillis();
        if (perfLogEnable.equalsIgnoreCase("true")) {
            Log.info("Time taken to execute " +
                "getTemplateInformation=" +
                (endTimeTemplate - startTime) +
                "ms binderMap=" +
                (endTimebinderMap - startTime) +
                "ms contentFetchManager=" +
                (endTimeFetchManager - startTime) +
                "ms resultSetMap=" +
                (endTimeresultSetMap - startTime) +
                "ms getManager:getAllFolderContentItems = " +
                (endTime - startTime) +
                "ms overallTime=" +
                (endTime - firstStartTime) +
                "ms folderID =" +
                collectionID);
        }
    }
}
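The per-step timing pattern in the code above (capture a timestamp after each phase, then log all the deltas in one line) can be factored into a small reusable helper. A sketch in Python, not part of the Stellent API, just the same measurement idea:

```python
import time

class StepTimer:
    """Record named checkpoints and report each step's elapsed time."""

    def __init__(self):
        self._t0 = time.perf_counter()
        self._marks = []

    def mark(self, name):
        # Equivalent of the endTime* variables in the Java snippet above.
        self._marks.append((name, time.perf_counter() - self._t0))

    def report(self):
        # One log line containing all the deltas, like the Log.info() call.
        return " ".join(f"{name}={dt * 1000:.0f}ms" for name, dt in self._marks)

timer = StepTimer()
time.sleep(0.01)
timer.mark("getTemplateInformation")
time.sleep(0.01)
timer.mark("contentFetchManager")
print(timer.report())
```

Measuring every phase cheaply and logging once, as the original code does, is usually the right first step before optimizing: the log shows which of the phases actually dominates.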
Edited by: 838623 on Feb 22, 2011 1:43 AM
Hi.
The SELECT statement accessing the MSEG table is often slow.
To improve the performance of MSEG access:
1. Check for the proper notes in the Service Marketplace if you are working on a CIN version.
2. Index the MSEG table.
3. Check and limit the columns in the SELECT statement.
Possible way:
SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
FROM MSEG
INTO CORRESPONDING FIELDS OF TABLE ITAB
WHERE WERKS EQ P_WERKS AND
MBLNR IN S_MBLNR AND
BWART EQ '105' .
DELETE itab WHERE mblnr EQ '5002361303'. "field name assumed to be MBLNR
DELETE itab WHERE mblnr EQ '5003501080'.
DELETE itab WHERE mblnr EQ '5002996300'.
DELETE itab WHERE mblnr EQ '5002996407'.
DELETE itab WHERE mblnr EQ '5003587026'.
DELETE itab WHERE mblnr EQ '5003493186'.
DELETE itab WHERE mblnr EQ '5002720583'.
DELETE itab WHERE mblnr EQ '5002928122'.
DELETE itab WHERE mblnr EQ '5002628263'.
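Instead of issuing one DELETE per excluded document number, the exclusions can be handled in a single pass against a set. A general sketch in Python (not ABAP; the `mblnr` field name is carried over from the SELECT above as an assumption):

```python
# Filter out an exclusion set in one linear pass instead of repeated deletes.
EXCLUDED = {
    "5002361303", "5003501080", "5002996300", "5002996407",
    "5003587026", "5003493186", "5002720583", "5002928122",
    "5002628263",
}

def drop_excluded(rows, excluded=EXCLUDED):
    # One scan over the data; each membership test on a set is O(1),
    # versus one full pass per excluded value with repeated deletes.
    return [r for r in rows if r["mblnr"] not in excluded]

rows = [{"mblnr": "5002361303"}, {"mblnr": "4900000001"}]
print(drop_excluded(rows))  # → [{'mblnr': '4900000001'}]
```

In ABAP the equivalent is a single DELETE with a range table or an IN condition, so the internal table is scanned once rather than once per value.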
Regards
Bala.M
Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM -
Post Goods Issue (VL06O) - taking more time approximate 30 to 45 minutes
Dear Sir,
While doing post goods issue against a delivery document, the system is taking a lot of time. This issue is very urgent; can anyone resolve it or provide a suitable solution?
We create approximately 160 sales orders/deliveries every day and post goods issue against them using transaction code VL06O; the system is taking too much time for PGI.
Kindly provide suitable solution for the same.
Regards,
Vijay Sanguri
Hi,
See Note 113048 - Collective note on delivery monitor, and search for notes related to performance.
Do a trace with transaction ST05 (ask a Basis consultant for help) and find the bottleneck. Look for possible sources of performance problems in user exits, enhancements, and so on.
I hope this helps you
Regards
Eduardo -
Create index is taking more time
Hi,
One of the concurrent programs is taking more time. We generated the trace file and found that the CREATE INDEX is what takes most of the time.
Below is an extract from the trace file; this type of index creation happens many times in the Oracle standard program.
Can somebody let me know why there is such a big difference between CPU and elapsed time?
We are seeing the PX Deq: Execute Reply event as well, which looks like idle time for the database.
Please let me know which database parameter is affecting this.
CREATE INDEX ITEM_CATEGORIES_N2_BD9 ON ITEM_CATEGORIES_BD9(CATEGORY_SET_ID,
SR_CATEGORY_ID,ORGANIZATION_ID,SR_INSTANCE_ID) PARALLEL TABLESPACE MSCX
STORAGE( INITIAL 40960 NEXT 33554432 PCTINCREASE 0) PCTFREE 10 INITRANS 11
MAXTRANS 255
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 3 0 0
Execute 1 0.35 364.82 131168 117945 60324 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.35 364.83 131168 117948 60324 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 80 (recursive depth: 2)
Elapsed times include waiting on following events:
Event waited on                          Times Waited   Max. Wait   Total Waited
reliable message 1 0.00 0.00
enq: KO - fast object checkpoint 1 0.01 0.01
PX Deq: Join ACK 6 0.00 0.00
PX Deq Credit: send blkd 112 0.00 0.01
PX qref latch 7 0.00 0.00
PX Deq: Parse Reply 3 0.00 0.00
PX Deq: Execute Reply 604 1.96 364.42
log file sync 1 0.00 0.00
PX Deq: Signal ACK 1 0.00 0.00
latch: session allocation 2 0.00 0.00
Regards,
user12121524 wrote:
CREATE INDEX ITEM_CATEGORIES_N2_BD9 ON ITEM_CATEGORIES_BD9(CATEGORY_SET_ID,
SR_CATEGORY_ID,ORGANIZATION_ID,SR_INSTANCE_ID) PARALLEL TABLESPACE MSCX
STORAGE( INITIAL 40960 NEXT 33554432 PCTINCREASE 0) PCTFREE 10 INITRANS 11
MAXTRANS 255
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 3 0 0
Execute 1 0.35 364.82 131168 117945 60324 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.35 364.83 131168 117948 60324 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 80 (recursive depth: 2)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
reliable message 1 0.00 0.00
enq: KO - fast object checkpoint 1 0.01 0.01
PX Deq: Join ACK 6 0.00 0.00
PX Deq Credit: send blkd 112 0.00 0.01
PX qref latch 7 0.00 0.00
PX Deq: Parse Reply 3 0.00 0.00
PX Deq: Execute Reply 604 1.96 364.42
log file sync 1 0.00 0.00
PX Deq: Signal ACK 1 0.00 0.00
latch: session allocation 2 0.00 0.00
What you've given us is the query coordinator trace, which basically tells us that the coordinator waited 364 seconds for the PX slaves to tell it that they had completed their tasks (the "PX Deq: Execute Reply" time). You need to look at the slave traces to find out where they spent their time - and that's probably not going to be easy if there are lots of parallel pieces of processing going on.
If you want to do some debugging (in general) one option is to add a query against V$pq_tqstat after each piece of parallel processing and log the results to a named file, or write them to a table with a tag, as this will tell you how many slaves were involved, how, and what the distribution of work and time was.
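The logging suggestion above can be sketched generically: after each piece of parallel processing, run a diagnostic query and append the tagged rows to a named file. The sketch below is Python with an in-memory SQLite table standing in for V$PQ_TQSTAT; the real view, the Oracle connection, and the column list are placeholders, not the actual catalog definition:

```python
import sqlite3

def log_tagged_results(conn, query, tag, path):
    """Run a diagnostic query and append its rows, tagged, to a file."""
    rows = conn.execute(query).fetchall()
    with open(path, "a") as f:
        for row in rows:
            # One tagged line per row, so runs can be told apart later.
            f.write(f"{tag}\t" + "\t".join(str(c) for c in row) + "\n")
    return rows

# Stand-in table: real code would query v$pq_tqstat over an Oracle connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pq_tqstat (dfo_number INT, tq_id INT, "
             "server_type TEXT, num_rows INT)")
conn.execute("INSERT INTO pq_tqstat VALUES (1, 0, 'Producer', 131168)")

rows = log_tagged_results(conn, "SELECT * FROM pq_tqstat",
                          tag="after_create_index", path="px_debug.log")
print(rows)  # → [(1, 0, 'Producer', 131168)]
```

The tag (here `after_create_index`) identifies which parallel step produced each snapshot, which is what makes the distribution of work across slaves comparable between steps.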
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
Level1 backup is taking more time than Level0
The Level 1 backup is taking more time than the Level 0 backup, and I am really frustrated about how that could happen. I have a 6.5 GB database. Level 0 took 8 hrs, but Level 1 is taking more than 8 hrs. Please help me in this regard.
Ogan Ozdogan wrote:
Charles,
By enabling block change tracking, the incremental backup will indeed be faster than what he has now. But I think this does not address the question of the OP - unless you are saying that an incremental backup without block change tracking is slower than a level 0 (full) backup?
Thank you in anticipation.
Ogan
Ogan,
I can't explain why a 6.5GB level 0 RMAN backup would require 8 hours to complete (maybe a very slow destination device connected by 10Mb/s Ethernet) - I would expect that it should complete in a couple of minutes.
An incremental level 1 backup without a block change tracking file could take longer than a level 0 backup. I encountered a well-written description of why that can happen, but I can't seem to locate the source at the moment. The longer run time might be related to the additional code paths required to constantly compare the SCN of each block, and to the variable write rate, which may affect some devices, such as tape drives.
A paraphrase from the book "Oracle Database 10g RMAN Backup & Recovery"
"Incremental backups must check the header of each block to discover if it has changed since the last incremental backup - that means an incremental backup may not complete much faster than a full backup."
Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Which statement is taking more time in a block?
Hi,
I have a PL/SQL block with thousands of SQL statements in it, line by line.
My question is: how can I know which particular SQL statement is taking more time?
Thanks,
Vinod
910575 wrote:
This is not the answer I expected.
Having 1000 SQL statements in a single PL/SQL block does not make any sense. Fact.
This is not how one modularises software. And modularisation is a critically important fundamental.
Each SQL means I/O. I/O is the single most expensive database operation. More I/O means slower performance.
So why are 1000+ SQL statements needed? Do they pull data into PL/SQL? If so, why? What does the PL/SQL do with the data returned by these SQLs? Do the SQLs use bind variables? Can the SQLs be combined? What are the business requirements that need to be met?
I just want to find which SQL statement is taking more time in a procedure/block.
Not the correct approach. Performance needs to be looked at for the COMPLETE process, not a single step in the process.
Performance is the result of a sound design and a proper implementation of that design in code. It is not about focusing on a single line of code and saying that is the problem that needs to be fixed. The problem can be in the design, or in how the code was written - not in a single line of code.
And seeing that you have 1000+ SQL statements in a single PL/SQL block - that IS an indication of a basic design flaw. -
Dataloading taking more time from PSA to infocube through DTP
Hi All,
When I load data to an InfoCube from the PSA through a DTP, it is taking more time. It has already taken 4 days to load. With comparable data volumes, the previous loads completed in 2-3 hrs. This is the first time I am facing this issue.
Note: There is a start routine written in the transformations.
Please let me know how to identify whether the code is written in the global declaration or locally, and if so, what the procedure is to correct it.
Thanks,
Jack
Hi Jack,
To improve the performance of the data load, you can do the below:
1. Compress old data.
2. Delete and rebuild indexes.
3. Read internal tables with BINARY SEARCH.
4. Do not use a LOOP within a LOOP.
5. Check sy-subrc after READ, etc.
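Tips 3 and 4 above are both about lookup cost: a linear READ inside a loop over n outer rows and m lookup rows is O(n*m), while a sorted table with binary search brings each lookup down to O(log m). A small illustration in Python, where `bisect` plays the role of `READ TABLE ... BINARY SEARCH` (table contents are invented):

```python
import bisect

# Sorted "internal table" of (key, text) pairs, like a sorted lookup table.
t522t = sorted([("0001", "Mr."), ("0002", "Mrs."), ("0003", "Ms.")])
keys = [k for k, _ in t522t]

def read_binary_search(key):
    # Equivalent of READ TABLE ... WITH KEY ... BINARY SEARCH: O(log n).
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return t522t[i][1]
    return None  # corresponds to sy-subrc <> 0

print(read_binary_search("0002"))  # → Mrs.
print(read_binary_search("9999"))  # → None
```

The precondition is the same as in ABAP: the table must actually be sorted by the lookup key, otherwise the binary search silently returns wrong results.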
Hope this helps you.
-Vikram -
Hi Experts,
I have joined the two tables PRPS and PROJ with an inner join, but it is taking more time.
Please advise.
The code is:
SELECT prps~posid
       prps~post1
       prps~objnr
       prps~pbukr
       prps~prctr
       prps~erdat
       prps~zz_can
  INTO TABLE itab_prps_all
  FROM proj INNER JOIN prps
    ON proj~pspnr = prps~psphi
  WHERE prps~posid IN sel_wbs
    AND prps~pbukr IN sel_comp
    AND prps~prctr IN sel_prct
    AND prps~erdat IN sel_date
    AND proj~pprof NE wa_pprof.
thanks in advance,
SG
Hi,
Please use FOR ALL ENTRIES IN; it will increase the performance of the code.
Example:
Exporting all flight data for a specified departure city. The relevant airlines and flight numbers are first put into an internal table entry_tab, which is evaluated in the WHERE condition of the subsequent SELECT statement.
PARAMETERS p_city TYPE spfli-cityfrom.

TYPES: BEGIN OF entry_tab_type,
         carrid TYPE spfli-carrid,
         connid TYPE spfli-connid,
       END OF entry_tab_type.

DATA: entry_tab   TYPE TABLE OF entry_tab_type,
      sflight_tab TYPE SORTED TABLE OF sflight
                  WITH UNIQUE KEY carrid connid fldate.

SELECT carrid connid
  FROM spfli
  INTO CORRESPONDING FIELDS OF TABLE entry_tab
  WHERE cityfrom = p_city.

SELECT carrid connid fldate
  FROM sflight
  INTO CORRESPONDING FIELDS OF TABLE sflight_tab
  FOR ALL ENTRIES IN entry_tab
  WHERE carrid = entry_tab-carrid AND
        connid = entry_tab-connid.
thanks and regards
rahul sharma
Edited by: RAHUL SHARMA on Sep 18, 2008 7:24 AM
Edited by: RAHUL SHARMA on Sep 18, 2008 7:33 AM
Edited by: RAHUL SHARMA on Sep 18, 2008 7:34 AM -
PGI Taking more time approximate 30 to 45 minutes
Dear Sir,
While doing post goods issue against a delivery document, the system is taking a lot of time. This issue is very urgent; can anyone resolve it or provide a suitable solution?
We create approximately 160 sales orders/deliveries every day and post goods issue against them using transaction code VL06O; the system is taking too much time for PGI.
Kindly provide suitable solution for the same.
Regards,
Vijay Sanguri
Hello Vijay,
I've just found SAP Note 1459217, which directly addresses your issue. Please have a look at it (the respective SAP Note text is below).
In case you have questions, let me know!
Best Regards,
Marcel Mizt
Symptom
Long runtimes occur when using transaction VL06G or VL06O in order to post goods issue (PGI) deliveries.
Poor response times occur when using transaction VL06G or VL06O in order to PGI deliveries.
Poor performance occurs with transaction VL06G / VL06O.
Performance issues occur with transaction VL06G / VL06O.
Environment
SAP R/3 All Release Levels
Reproducing the Issue
Execute transaction VL06O.
Choose "For Goods Issue". (Transaction VL06G).
Long runtimes occur.
Cause
There are too many documents in the database that need to be accessed.
The customising settings in the activity "set updating of partner index" are not activated.
(IMG -> Logistics Execution -> Shipping -> Delivery List -> Set Updating Of Partner Index).
Resolution
If there are too many documents in the database to access, archiving them improves the performance of VL06G.
The customising settings in the activity "set updating of partner index" can be updated to improve the performance of VL06G. (IMG -> Logistics Execution -> Shipping -> Delivery List -> Set Updating Of Partner Index). In this transaction, check the entries for transaction group 6 (= delivery). The effect of these settings is that the table VLKPA (SD index: deliveries by partner functions) is only filled with entries based on the partner functions listed (for example, WE = ship-to party). In transaction VL06O the system checks this customising in order to access the table VBPA or VLKPA.
If you change the settings of the activity "updating of partner index", run the report RVV05IVB to reorganize the index, selecting only the partner index in the delivery section of the screen (see Note 128947).
Flag the checkbox "display forwarding agent" (available in the display options section of the selection screen). When the list is generated, use the "set filter" functionality (menu path: Edit -> Set Filter) in order to select the deliveries corresponding to one forwarding agent. -
Query in timesten taking more time than query in oracle database
Hi,
Can anyone please explain why a query in TimesTen is taking more time
than the same query in the Oracle database?
Below I describe in detail what my settings are and what I have done,
step by step:
1. This is the table I created in the Oracle database
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
CREATE TABLE student (
id NUMBER(9) PRIMARY KEY,
first_name VARCHAR2(10),
last_name VARCHAR2(10)
);
2.THIS IS THE ANONYMOUS BLOCK I USE TO
POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
declare
firstname varchar2(12);
lastname varchar2(12);
catt number(9);
begin
for cntr in 1..2599999 loop
firstname:=(cntr+8)||'f';
lastname:=(cntr+2)||'l';
if cntr like '%9999' then
dbms_output.put_line(cntr);
end if;
insert into student values(cntr,firstname, lastname);
end loop;
end;
3. MY DSN IS SET THE FOLLOWING WAY:
DATA STORE PATH- G:\dipesh3repo\db
LOG DIRECTORY- G:\dipesh3repo\log
PERM DATA SIZE-1000
TEMP DATA SIZE-1000
MY TIMESTEN VERSION-
C:\Documents and Settings\dipesh>ttversion
TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
Instance admin: dipesh
Instance home directory: G:\TimestTen\TT70_32
Daemon home directory: G:\TimestTen\TT70_32\srv\info
THEN I CONNECT TO THE TIMESTEN DATABASE
C:\Documents and Settings\dipesh> ttisql
command>connect "dsn=dipesh3;oraclepwd=tiger";
4. THEN I START THE AGENT
call ttCacheUidPwdSet('SCOTT','TIGER');
Command> CALL ttCacheStart();
5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
create readonly cache group rc_student autorefresh
interval 5 seconds from student
(id int not null primary key, first_name varchar2(10), last_name varchar2(10));
load cache group rc_student commit every 100 rows;
6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
I SET THE TIMING..
command>TIMING 1;
consider this query now..
Command> select * from student where first_name='2155666f';
< 2155658, 2155666f, 2155660l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
another query-
Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
2206: Table SCOTT.STUDENTS not found
Execution time (SQLPrepare) = 0.074964 seconds.
The command failed.
Command> SELECT * FROM STUDENT where first_name='2093434f';
< 2093426, 2093434f, 2093428l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
Command>
7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
ID FIRST_NAME LAST_NAME
1498663 1498671f 1498665l
Elapsed: 00:00:00.15
Can anyone please explain why the query in TimesTen is taking more time
than the query in the Oracle database?
Message was edited by: Dipesh Majumdar
user542575
Message was edited by:
user542575
TimesTen
Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
Version: 7.0.4.0.0 64 bit
Schema:
create usermanaged cache group factCache from
MV_US_DATAMART (
ORDER_DATE DATE,
IF_SYSTEM VARCHAR2(32) NOT NULL,
GROUPING_ID TT_BIGINT,
TIME_DIM_ID TT_INTEGER NOT NULL,
BUSINESS_DIM_ID TT_INTEGER NOT NULL,
ACCOUNT_DIM_ID TT_INTEGER NOT NULL,
ORDERTYPE_DIM_ID TT_INTEGER NOT NULL,
INSTR_DIM_ID TT_INTEGER NOT NULL,
EXECUTION_DIM_ID TT_INTEGER NOT NULL,
EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
NO_ORDERS TT_BIGINT,
FILLED_QUANTITY TT_BIGINT,
CNT_FILLED_QUANTITY TT_BIGINT,
QUANTITY TT_BIGINT,
CNT_QUANTITY TT_BIGINT,
COMMISSION BINARY_FLOAT,
CNT_COMMISSION TT_BIGINT,
FILLS_NUMBER TT_BIGINT,
CNT_FILLS_NUMBER TT_BIGINT,
AGGRESSIVE_FILLS TT_BIGINT,
CNT_AGGRESSIVE_FILLS TT_BIGINT,
NOTIONAL BINARY_FLOAT,
CNT_NOTIONAL TT_BIGINT,
TOTAL_PRICE BINARY_FLOAT,
CNT_TOTAL_PRICE TT_BIGINT,
CANCELLED_ORDERS_COUNT TT_BIGINT,
CNT_CANCELLED_ORDERS_COUNT TT_BIGINT,
ROUTED_ORDERS_NO TT_BIGINT,
CNT_ROUTED_ORDERS_NO TT_BIGINT,
ROUTED_LIQUIDITY_QTY TT_BIGINT,
CNT_ROUTED_LIQUIDITY_QTY TT_BIGINT,
REMOVED_LIQUIDITY_QTY TT_BIGINT,
CNT_REMOVED_LIQUIDITY_QTY TT_BIGINT,
ADDED_LIQUIDITY_QTY TT_BIGINT,
CNT_ADDED_LIQUIDITY_QTY TT_BIGINT,
AGENT_CHARGES BINARY_FLOAT,
CNT_AGENT_CHARGES TT_BIGINT,
CLEARING_CHARGES BINARY_FLOAT,
CNT_CLEARING_CHARGES TT_BIGINT,
EXECUTION_CHARGES BINARY_FLOAT,
CNT_EXECUTION_CHARGES TT_BIGINT,
TRANSACTION_CHARGES BINARY_FLOAT,
CNT_TRANSACTION_CHARGES TT_BIGINT,
ORDER_MANAGEMENT BINARY_FLOAT,
CNT_ORDER_MANAGEMENT TT_BIGINT,
SETTLEMENT_CHARGES BINARY_FLOAT,
CNT_SETTLEMENT_CHARGES TT_BIGINT,
RECOVERED_AGENT BINARY_FLOAT,
CNT_RECOVERED_AGENT TT_BIGINT,
RECOVERED_CLEARING BINARY_FLOAT,
CNT_RECOVERED_CLEARING TT_BIGINT,
RECOVERED_EXECUTION BINARY_FLOAT,
CNT_RECOVERED_EXECUTION TT_BIGINT,
RECOVERED_TRANSACTION BINARY_FLOAT,
CNT_RECOVERED_TRANSACTION TT_BIGINT,
RECOVERED_ORD_MGT BINARY_FLOAT,
CNT_RECOVERED_ORD_MGT TT_BIGINT,
RECOVERED_SETTLEMENT BINARY_FLOAT,
CNT_RECOVERED_SETTLEMENT TT_BIGINT,
CLIENT_AGENT BINARY_FLOAT,
CNT_CLIENT_AGENT TT_BIGINT,
CLIENT_ORDER_MGT BINARY_FLOAT,
CNT_CLIENT_ORDER_MGT TT_BIGINT,
CLIENT_EXEC BINARY_FLOAT,
CNT_CLIENT_EXEC TT_BIGINT,
CLIENT_TRANS BINARY_FLOAT,
CNT_CLIENT_TRANS TT_BIGINT,
CLIENT_CLEARING BINARY_FLOAT,
CNT_CLIENT_CLEARING TT_BIGINT,
CLIENT_SETTLE BINARY_FLOAT,
CNT_CLIENT_SETTLE TT_BIGINT,
CHARGEABLE_TAXES BINARY_FLOAT,
CNT_CHARGEABLE_TAXES TT_BIGINT,
VENDOR_CHARGE BINARY_FLOAT,
CNT_VENDOR_CHARGE TT_BIGINT,
ROUTING_CHARGES BINARY_FLOAT,
CNT_ROUTING_CHARGES TT_BIGINT,
RECOVERED_ROUTING BINARY_FLOAT,
CNT_RECOVERED_ROUTING TT_BIGINT,
CLIENT_ROUTING BINARY_FLOAT,
CNT_CLIENT_ROUTING TT_BIGINT,
TICKET_CHARGES BINARY_FLOAT,
CNT_TICKET_CHARGES TT_BIGINT,
RECOVERED_TICKET_CHARGES BINARY_FLOAT,
CNT_RECOVERED_TICKET_CHARGES TT_BIGINT,
PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
READONLY);
No of rows: 2228558
Config:
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
< ConnectionCharacterSet, US7ASCII >
< ConnectionName, tt_us_dma >
< Connections, 64 >
< DataBaseCharacterSet, AL32UTF8 >
< DataStore, e:\andrew\datacache\usDMA >
< DurableCommits, 0 >
< GroupRestrict, <NULL> >
< LockLevel, 0 >
< LockWait, 10 >
< LogBuffSize, 65536 >
< LogDir, e:\andrew\datacache\ >
< LogFileSize, 64 >
< LogFlushMethod, 1 >
< LogPurge, 0 >
< Logging, 1 >
< MemoryLock, 0 >
< NLS_LENGTH_SEMANTICS, BYTE >
< NLS_NCHAR_CONV_EXCP, 0 >
< NLS_SORT, BINARY >
< OracleID, NYCATP1 >
< PassThrough, 0 >
< PermSize, 4000 >
< PermWarnThreshold, 90 >
< PrivateCommands, 0 >
< Preallocate, 0 >
< QueryThreshold, 0 >
< RACCallback, 0 >
< SQLQueryTimeout, 0 >
< TempSize, 514 >
< TempWarnThreshold, 90 >
< Temporary, 1 >
< TransparentLoad, 0 >
< TypeMode, 0 >
< UID, OS_OWNER >
ORACLE:
Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
Schema:
CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
TABLESPACE TS_OS
PARTITION BY RANGE (ORDER_DATE)
PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
LOGGING
NOCOMPRESS
TABLESPACE TS_OS
NOCACHE
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
USING INDEX
TABLESPACE TS_OS_INDEX
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT order_date, if_system,
GROUPING_ID (order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id
) GROUPING_ID,
/* ============ DIMENSIONS ============ */
time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
instr_dim_id, execution_dim_id, exec_exchange_dim_id,
/* ============ MEASURES ============ */
-- o.FX_RATE /* FX_RATE */,
COUNT (*) no_orders,
-- SUM(NO_ORDERS) NO_ORDERS,
-- COUNT(NO_ORDERS) CNT_NO_ORDERS,
SUM (filled_quantity) filled_quantity,
COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
COUNT (quantity) cnt_quantity, SUM (commission) commission,
COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
COUNT (fills_number) cnt_fills_number,
SUM (aggressive_fills) aggressive_fills,
COUNT (aggressive_fills) cnt_aggressive_fills,
SUM (fx_rate * filled_quantity * average_price) notional,
COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
SUM (fx_rate * fills_number * average_price) total_price,
COUNT (fx_rate * fills_number * average_price) cnt_total_price,
SUM (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END) cancelled_orders_count,
COUNT (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END
) cnt_cancelled_orders_count,
-- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
-- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
-- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
SUM (routed_orders_no) routed_orders_no,
COUNT (routed_orders_no) cnt_routed_orders_no,
SUM (routed_liquidity_qty) routed_liquidity_qty,
COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
SUM (removed_liquidity_qty) removed_liquidity_qty,
COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
SUM (added_liquidity_qty) added_liquidity_qty,
COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
SUM (agent_charges) agent_charges,
COUNT (agent_charges) cnt_agent_charges,
SUM (clearing_charges) clearing_charges,
COUNT (clearing_charges) cnt_clearing_charges,
SUM (execution_charges) execution_charges,
COUNT (execution_charges) cnt_execution_charges,
SUM (transaction_charges) transaction_charges,
COUNT (transaction_charges) cnt_transaction_charges,
SUM (order_management) order_management,
COUNT (order_management) cnt_order_management,
SUM (settlement_charges) settlement_charges,
COUNT (settlement_charges) cnt_settlement_charges,
SUM (recovered_agent) recovered_agent,
COUNT (recovered_agent) cnt_recovered_agent,
SUM (recovered_clearing) recovered_clearing,
COUNT (recovered_clearing) cnt_recovered_clearing,
SUM (recovered_execution) recovered_execution,
COUNT (recovered_execution) cnt_recovered_execution,
SUM (recovered_transaction) recovered_transaction,
COUNT (recovered_transaction) cnt_recovered_transaction,
SUM (recovered_ord_mgt) recovered_ord_mgt,
COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
SUM (recovered_settlement) recovered_settlement,
COUNT (recovered_settlement) cnt_recovered_settlement,
SUM (client_agent) client_agent,
COUNT (client_agent) cnt_client_agent,
SUM (client_order_mgt) client_order_mgt,
COUNT (client_order_mgt) cnt_client_order_mgt,
SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
SUM (client_trans) client_trans,
COUNT (client_trans) cnt_client_trans,
SUM (client_clearing) client_clearing,
COUNT (client_clearing) cnt_client_clearing,
SUM (client_settle) client_settle,
COUNT (client_settle) cnt_client_settle,
SUM (chargeable_taxes) chargeable_taxes,
COUNT (chargeable_taxes) cnt_chargeable_taxes,
SUM (vendor_charge) vendor_charge,
COUNT (vendor_charge) cnt_vendor_charge,
SUM (routing_charges) routing_charges,
COUNT (routing_charges) cnt_routing_charges,
SUM (recovered_routing) recovered_routing,
COUNT (recovered_routing) cnt_recovered_routing,
SUM (client_routing) client_routing,
COUNT (client_routing) cnt_client_routing,
SUM (ticket_charges) ticket_charges,
COUNT (ticket_charges) cnt_ticket_charges,
SUM (recovered_ticket_charges) recovered_ticket_charges,
COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
FROM us_datamart_raw
GROUP BY order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id;
-- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
-- by Oracle with the associated materialized view.
CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
NOLOGGING
NOPARALLEL
COMPRESS 7;
No of rows: 2228558
The query (taken Mondrian) I run against each of them is:
select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
--, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
--, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
--, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
--, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
--, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
--, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
--, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
--, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
--, sum("MV_US_DATAMART"."COMMISSION") as "m9"
--, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
--, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
--,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
--,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
--, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
--, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
--, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
--, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
--,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
--, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
where I uncomment a column at a time and rerun. I improved the TimesTen results since my first post, by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
No. of columns   ORACLE   TimesTen
1 1.05 0.94
2 1.07 1.47
3 2.04 1.8
4 2.06 2.08
5 2.09 2.4
6 3.01 2.67
7 4.02 3.06
8 4.03 3.37
9 4.04 3.62
10 4.06 4.02
11 4.08 4.31
12 4.09 4.61
13 5.01 4.76
14 5.02 5.06
15 5.04 5.25
16 5.05 5.48
17 5.08 5.84
18 6 6.21
19 6.02 6.34
20 6.04 6.75 -
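The one-column-at-a-time methodology described above can be sketched as a SQL*Plus session (the statements below are illustrative; `SET TIMING ON` prints the elapsed time after each statement, which is presumably how the Oracle-side numbers were collected — ttIsql has a similar built-in timing facility on the TimesTen side):

```sql
-- Hypothetical SQL*Plus session reproducing the per-column timing test.
-- SET TIMING ON reports elapsed time for each statement that follows.
SET TIMING ON

-- 1 measure: aggregate a single column over the full table
SELECT SUM(m."NOTIONAL") AS m0
FROM "OS_OWNER"."MV_US_DATAMART" m;

-- 2 measures: uncomment (add) the next column and rerun
SELECT SUM(m."NOTIONAL")        AS m0,
       SUM(m."FILLED_QUANTITY") AS m1
FROM "OS_OWNER"."MV_US_DATAMART" m;

-- ...and so on up to all 20 SUM() columns.
```

Running each statement twice and keeping the second timing avoids measuring the initial physical reads into the buffer cache.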
Custom report taking more time to complete with status Normal
Hi All,
A custom report (Aging Report) in Oracle is taking more time to complete with status Normal.
In one instance the same report takes 5 minutes; in another instance it takes 40-50 minutes to complete.
We have enabled trace and checked the trace file, but all the queries are working fine.
Could you please suggest what might be causing this issue?
Thanks in advance!!
TKPROF: Release 10.1.0.5.0 - Production on Tue Jun 5 10:49:32 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Sort options: prsela exeela fchela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
Error in CREATE TABLE of EXPLAIN PLAN table: APPS.prof$plan_table
ORA-00922: missing or invalid option
parse error offset: 1049
EXPLAIN PLAN option disabled.
SELECT DISTINCT OU.ORGANIZATION_ID , OU.BUSINESS_GROUP_ID
FROM
HR_OPERATING_UNITS OU WHERE OU.SET_OF_BOOKS_ID =:B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.05 11 22 0 3
total 3 0.00 0.05 11 22 0 3
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
3 HASH UNIQUE (cr=22 pr=11 pw=0 time=52023 us cost=10 size=66 card=1)
3 NESTED LOOPS (cr=22 pr=11 pw=0 time=61722 us)
3 NESTED LOOPS (cr=20 pr=11 pw=0 time=61672 us cost=9 size=66 card=1)
3 NESTED LOOPS (cr=18 pr=11 pw=0 time=61591 us cost=7 size=37 card=1)
3 NESTED LOOPS (cr=16 pr=11 pw=0 time=61531 us cost=7 size=30 card=1)
3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=11 pr=9 pw=0 time=37751 us cost=6 size=22 card=1)
18 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK1 (cr=1 pr=1 pw=0 time=17135 us cost=1 size=0 card=18)(object id 43610)
3 TABLE ACCESS BY INDEX ROWID HR_ALL_ORGANIZATION_UNITS (cr=5 pr=2 pw=0 time=18820 us cost=1 size=8 card=1)
3 INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK (cr=2 pr=0 pw=0 time=26 us cost=0 size=0 card=1)(object id 43657)
3 INDEX UNIQUE SCAN HR_ALL_ORGANIZATION_UNTS_TL_PK (cr=2 pr=0 pw=0 time=32 us cost=0 size=7 card=1)(object id 44020)
3 INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=2 pr=0 pw=0 time=52 us cost=1 size=0 card=1)(object id 330960)
3 TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=2 pr=0 pw=0 time=26 us cost=2 size=29 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 11 0.01 0.05
asynch descriptor resize 2 0.00 0.00
INSERT INTO FND_LOG_MESSAGES ( ECID_ID, ECID_SEQ, CALLSTACK, ERRORSTACK,
MODULE, LOG_LEVEL, MESSAGE_TEXT, SESSION_ID, USER_ID, TIMESTAMP,
LOG_SEQUENCE, ENCODED, NODE, NODE_IP_ADDRESS, PROCESS_ID, JVM_ID, THREAD_ID,
AUDSID, DB_INSTANCE, TRANSACTION_CONTEXT_ID )
VALUES
( SYS_CONTEXT('USERENV', 'ECID_ID'), SYS_CONTEXT('USERENV', 'ECID_SEQ'),
:B16 , :B15 , SUBSTRB(:B14 ,1,255), :B13 , SUBSTRB(:B12 , 1, 4000), :B11 ,
NVL(:B10 , -1), SYSDATE, FND_LOG_MESSAGES_S.NEXTVAL, :B9 , SUBSTRB(:B8 ,1,
60), SUBSTRB(:B7 ,1,30), SUBSTRB(:B6 ,1,120), SUBSTRB(:B5 ,1,120),
SUBSTRB(:B4 ,1,120), :B3 , :B2 , :B1 ) RETURNING LOG_SEQUENCE INTO :O0
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 20 0.00 0.03 4 1 304 20
Fetch 0 0.00 0.00 0 0 0 0
total 21 0.00 0.03 4 1 304 20
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
1 LOAD TABLE CONVENTIONAL (cr=1 pr=4 pw=0 time=36498 us)
1 SEQUENCE FND_LOG_MESSAGES_S (cr=0 pr=0 pw=0 time=24 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 4 0.01 0.03
SELECT MESSAGE_TEXT, MESSAGE_NUMBER, TYPE, FND_LOG_SEVERITY, CATEGORY,
SEVERITY
FROM
FND_NEW_MESSAGES M, FND_APPLICATION A WHERE :B3 = M.MESSAGE_NAME AND :B2 =
M.LANGUAGE_CODE AND :B1 = A.APPLICATION_SHORT_NAME AND M.APPLICATION_ID =
A.APPLICATION_ID
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.03 4 12 0 2
total 5 0.00 0.03 4 12 0 2
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
1 NESTED LOOPS (cr=6 pr=2 pw=0 time=15724 us cost=3 size=134 card=1)
1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=20 us cost=1 size=9 card=1)
1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
1 TABLE ACCESS BY INDEX ROWID FND_NEW_MESSAGES (cr=4 pr=2 pw=0 time=15693 us cost=2 size=125 card=1)
1 INDEX UNIQUE SCAN FND_NEW_MESSAGES_PK (cr=3 pr=1 pw=0 time=6386 us cost=1 size=0 card=1)(object id 34367)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 4 0.00 0.03
DELETE FROM MO_GLOB_ORG_ACCESS_TMP
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.02 3 4 4 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 3 4 4 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
0 DELETE MO_GLOB_ORG_ACCESS_TMP (cr=4 pr=3 pw=0 time=29161 us)
1 TABLE ACCESS FULL MO_GLOB_ORG_ACCESS_TMP (cr=3 pr=2 pw=0 time=18165 us cost=2 size=13 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 3 0.01 0.02
SET TRANSACTION READ ONLY
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.01 0 0 0 0
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
begin Fnd_Concurrent.Init_Request; end;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 148 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 148 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
log file sync 1 0.01 0.01
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
declare X0rv BOOLEAN; begin X0rv := FND_INSTALLATION.GET(:APPL_ID,
:DEP_APPL_ID, :STATUS, :INDUSTRY); :X0 := sys.diutil.bool_to_int(X0rv);
end;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 9 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 9 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 8 0.00 0.00
SQL*Net message from client 8 0.00 0.00
begin fnd_global.bless_next_init('FND_PERMIT_0000');
fnd_global.initialize(:session_id, :user_id, :resp_id, :resp_appl_id,
:security_group_id, :site_id, :login_id, :conc_login_id, :prog_appl_id,
:conc_program_id, :conc_request_id, :conc_priority_request, :form_id,
:form_application_id, :conc_process_id, :conc_queue_id, :queue_appl_id,
:server_id); fnd_profile.put('ORG_ID', :org_id);
fnd_profile.put('MFG_ORGANIZATION_ID', :mfg_org_id);
fnd_profile.put('MFG_CHART_OF_ACCOUNTS_ID', :coa);
fnd_profile.put('APPS_MAINTENANCE_MODE', :amm); end;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 8 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 8 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
SELECT FPI.STATUS, FPI.INDUSTRY, FPI.PRODUCT_VERSION, FOU.ORACLE_USERNAME,
FPI.TABLESPACE, FPI.INDEX_TABLESPACE, FPI.TEMPORARY_TABLESPACE,
FPI.SIZING_FACTOR
FROM
FND_PRODUCT_INSTALLATIONS FPI, FND_ORACLE_USERID FOU, FND_APPLICATION FA
WHERE FPI.APPLICATION_ID = FA.APPLICATION_ID AND FPI.ORACLE_ID =
FOU.ORACLE_ID AND FA.APPLICATION_SHORT_NAME = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 7 0 1
total 4 0.00 0.00 0 7 0 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
1 NESTED LOOPS (cr=7 pr=0 pw=0 time=89 us)
1 NESTED LOOPS (cr=6 pr=0 pw=0 time=93 us cost=4 size=76 card=1)
1 NESTED LOOPS (cr=5 pr=0 pw=0 time=77 us cost=3 size=67 card=1)
1 TABLE ACCESS BY INDEX ROWID FND_APPLICATION (cr=2 pr=0 pw=0 time=19 us cost=1 size=9 card=1)
1 INDEX UNIQUE SCAN FND_APPLICATION_U3 (cr=1 pr=0 pw=0 time=9 us cost=0 size=0 card=1)(object id 33993)
1 TABLE ACCESS BY INDEX ROWID FND_PRODUCT_INSTALLATIONS (cr=3 pr=0 pw=0 time=51 us cost=2 size=58 card=1)
1 INDEX RANGE SCAN FND_PRODUCT_INSTALLATIONS_PK (cr=2 pr=0 pw=0 time=27 us cost=1 size=0 card=1)(object id 22583)
1 INDEX UNIQUE SCAN FND_ORACLE_USERID_U1 (cr=1 pr=0 pw=0 time=7 us cost=0 size=0 card=1)(object id 22597)
1 TABLE ACCESS BY INDEX ROWID FND_ORACLE_USERID (cr=1 pr=0 pw=0 time=7 us cost=1 size=9 card=1)
SELECT P.PID, P.SPID, AUDSID, PROCESS, SUBSTR(USERENV('LANGUAGE'), INSTR(
USERENV('LANGUAGE'), '.') + 1)
FROM
V$SESSION S, V$PROCESS P WHERE P.ADDR = S.PADDR AND S.AUDSID =
USERENV('SESSIONID')
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 173 (recursive depth: 1)
Rows Row Source Operation
1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1883 us cost=1 size=108 card=2)
1 HASH JOIN (cr=0 pr=0 pw=0 time=1869 us cost=1 size=100 card=2)
1 NESTED LOOPS (cr=0 pr=0 pw=0 time=1059 us cost=0 size=58 card=2)
182 FIXED TABLE FULL X$KSLWT (cr=0 pr=0 pw=0 time=285 us cost=0 size=1288 card=161)
1 FIXED TABLE FIXED INDEX X$KSUSE (ind:1) (cr=0 pr=0 pw=0 time=617 us cost=0 size=21 card=1)
181 FIXED TABLE FULL X$KSUPR (cr=0 pr=0 pw=0 time=187 us cost=0 size=10500 card=500)
1 FIXED TABLE FIXED INDEX X$KSLED (ind:2) (cr=0 pr=0 pw=0 time=4 us cost=0 size=4 card=1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
asynch descriptor resize 2 0.00 0.00
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 6 0.00 0.00 0 0 0 0
Execute 6 0.01 0.02 0 165 0 4
Fetch 1 0.00 0.00 0 0 0 1
total 13 0.01 0.02 0 165 0 5
Misses in library cache during parse: 0
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 37 0.00 0.00
SQL*Net message from client 37 1.21 2.19
log file sync 1 0.01 0.01
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 49 0.00 0.00 0 0 0 0
Execute 89 0.01 0.07 7 38 336 24
Fetch 29 0.00 0.09 15 168 0 27
total 167 0.02 0.16 22 206 336 51
Misses in library cache during parse: 3
Misses in library cache during execute: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
asynch descriptor resize 6 0.00 0.00
db file sequential read 22 0.01 0.14
48 user SQL statements in session.
1 internal SQL statements in session.
49 SQL statements in session.
0 statements EXPLAINed in this session.
Trace file compatibility: 10.01.00
Sort options: prsela exeela fchela
1 session in tracefile.
48 user SQL statements in trace file.
1 internal SQL statements in trace file.
49 SQL statements in trace file.
48 unique SQL statements in trace file.
928 lines in trace file.
1338833753 elapsed seconds in trace file. -
Report rdf with size 8mb taking more time to open
Hello All,
I have an RDF (Reports 6i) report of size 8.5 MB which takes a long time to open and is slow to access each field.
Please let me know how I can solve this issue.
Please do the needful.
Thanks.
Thanks for the immediate response.
Please let me know how I can find this out.
Right now I have the below details from Report -> Help:
Report Builder 6.0.8.11.3
ORACLE Server Release 8.0.6.0.0
Oracle Procedure Builder 6.0.8.11.0
Oracle ORACLE PL/SQL V8.0.6.0.0 - Production
Oracle CORE Version 4.0.6.0.0 - Production
Oracle Tools Integration Services 6.0.8.10.2
Oracle Tools Common Area 6.0.5.32.1
Oracle Toolkit 2 for Windows 32-bit platforms 6.0.5.35.0
Resource Object Store 6.0.5.0.1
Oracle Help 6.0.5.35.0
Oracle Sqlmgr 6.0.8.11.3
Oracle Query Builder 6.0.7.0.0 - Production
PL/SQL Editor (c) WinMain Software (www.winmain.com), v1.0 (Production)
Oracle ZRC 6.0.8.11.3
Oracle Express 6.0.8.3.5
Oracle XML Parser 1.0.2.1.0 Production
Oracle Virtual Graphics System 6.0.5.35.0
Oracle Image 6.0.5.34.0
Oracle Multimedia Widget 6.0.5.34.0
Oracle Tools GUI Utilities 6.0.5.35.0
Thanks
Edited by: Abdul Khan on Jan 26, 2010 11:54 PM -
XML Publisher(XDODTEXE) in EBS taking more time with the same SQL in TOAD
HI
XML Publisher(XDODTEXE) in EBS taking more time with the same SQL in TOAD.
The sql has 5 union clauses.
It takes 20-30 minutes in TOAD, compared to around 4-5 hours when run through a concurrent program in XML Publisher in EBS.
The Scalable flag at report level is turned on, with the JVM options set to -Xmx1024m -Xmx1024m in the concurrent program definition.
Other data-template configurations such as XSLT, Scalable, and Optimization are turned on, though I didn't bounce the OPP server for these to take effect, as I am not sure whether that is needed.
Thanks in advance for your help.
But the question is: how come it works in TOAD and takes only 15-20 minutes?
with initialization of session ?
what about sqlplus ?
Do I have to set up the the temp directory for the XML Publisher report to make it faster?
look at
R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues (Doc ID 1410160.1)
BI Publisher - Troubleshooting Oracle Business Intelligence (XML) Publisher For The Oracle E-Business Suite (Doc ID 364547.1) -
INSTR() function is taking more time
I am using the below select query in my procedure.
select distinct SUPPLIER_CIRCUIT_ID from SUPPLIER_DATA
where INSTR(i.SYSTEM_CIRCUIT_ID,SUPPLIER_CIRCUIT_ID) > 0;
I am taking SYSTEM_CIRCUIT_ID in cursor.
This query is taking more time. Is it possible to create a function-based index to speed up the query?
Hi,
Welcome to the forum!
993620 wrote:
im using the below query select query in my procedure.
select distinct SUPPLIER_CIRCUIT_ID from SUPPLIER_DATA
where INSTR(i.SYSTEM_CIRCUIT_ID,SUPPLIER_CIRCUIT_ID) > 0;
I am taking SYSTEM_CIRCUIT_ID in cursor.Show exactly what you're doing.
Whenever you have a problem, post a complete test script that people can run to re-create the problem and test their ideas.
See the forum FAQ {message:id=9360002}
This query is taking more time. Is it possible to create a function-based index to speed up the query?
Sorry, unless you're doing something very specific (such as always looking for the same sub-string), a function-based index won't help.
Oracle sells a separate product, called Oracle Text, for this kind of searching.
You might try LIKE:
WHERE i.SYSTEM_CIRCUIT_ID LIKE '%' || SUPPLIER_CIRCUIT_ID || '%'
If you're using a cursor, then that's probably slowing the process down much more than the part you're showing. -
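Since the reply mentions Oracle Text, here is a minimal sketch of what that could look like for this table (the table and column names are taken from the thread; the index name and the lack of any wordlist preference are assumptions):

```sql
-- Hypothetical Oracle Text setup. A CONTEXT index lets CONTAINS()
-- probe the text index instead of evaluating INSTR()/LIKE row by row.
CREATE INDEX supplier_data_txt_idx
  ON supplier_data (supplier_circuit_id)
  INDEXTYPE IS CTXSYS.CONTEXT;

-- Search for rows whose SUPPLIER_CIRCUIT_ID contains a given token
-- ('ABC123' is a made-up example value).
SELECT DISTINCT supplier_circuit_id
FROM   supplier_data
WHERE  CONTAINS(supplier_circuit_id, 'ABC123') > 0;
```

Note that CONTAINS is token-based, so this is not a drop-in replacement for arbitrary INSTR substring matching; true substring search would need something like a wordlist preference with the SUBSTRING_INDEX attribute, and the index must be kept synchronized as the table changes.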
Snapshot Refresh taking More Time
Dear All,
We are facing a Snapshot refresh problem currently in Production Environment.
Oracle Version : Oracle8i Enterprise Edition Release 8.1.6.1.0
Currently we have created a Snapshot on a Join with 2 remote tables using Synonyms.
ex:
CREATE SNAPSHOT XYZ REFRESH COMPLETE WITH ROWID
AS
SELECT a.* FROM SYN1 a, SYN2 b
Where b.ACCT_NO=a.ACCT_NO;
We have created an index on the above snapshot XYZ.
Create index XYZ_IDX1 on XYZ (ACCT_NO);
a. The Explain plan of the above query shows the Index Scan on SYN1.
If we run the above select statement, it hardly takes 2 seconds to execute.
b. But the complete refresh of snapshot XYZ takes almost 20 minutes just to truncate and insert 500 records, and it generates huge disk reads, as the remote table behind SYN1 contains 32 million records whereas SYN2 contains only 500 records.
If we truncate and insert into a table ourselves, as performed by the complete refresh of the snapshot, it hardly takes 4 seconds to refresh the table.
Please let me know what might be the possible reasons for the complete refresh of the snapshot taking more time.
Dear All,
While refreshing the snapshot XYZ, I could find the following.
a. Sort/Merge operation was performed while inserting the data into Snapshot.
INSERT /*+ APPEND */ INTO "XYZ"
SELECT a.* FROM SYN1 a, SYN2 b Where b.ACCT_NO=a.ACCT_NO;
The above operation performed huge disk reads.
b. By Changing the session parameter sort_area_size ,the time decreased by 50% but still the disk reads are huge.
I would like to know why a sort/merge operation is performed by the above insert.
Edited by: Prashanth Deshmukh on Mar 13, 2009 10:54 AM
Edited by: Prashanth Deshmukh on Mar 13, 2009 10:55 AM
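One common alternative to the slow complete refresh discussed above is a fast (incremental) refresh driven by snapshot logs on the master tables. A minimal sketch, assuming the master tables behind SYN1 and SYN2 are accessible for DDL at the master site (the names below are illustrative, and whether this particular join qualifies for fast refresh on 8.1.6 depends on the documented restrictions for join snapshots):

```sql
-- Hypothetical fast-refresh setup. Snapshot logs record changed rows
-- on the masters, so the refresh moves only the deltas instead of
-- truncating and re-inserting all rows across the database link.
CREATE SNAPSHOT LOG ON master_table1 WITH ROWID;
CREATE SNAPSHOT LOG ON master_table2 WITH ROWID;

-- A join snapshot needs the ROWID of every master table in its
-- select list to be fast refreshable.
CREATE SNAPSHOT xyz
  REFRESH FAST WITH ROWID
  AS
  SELECT a.rowid AS a_rid, b.rowid AS b_rid, a.*
  FROM   syn1 a, syn2 b
  WHERE  b.acct_no = a.acct_no;
```

If fast refresh is not an option, checking the join order and forcing a hash join (so the 500-row SYN2 side drives against SYN1) may reduce the sort/merge work observed during the insert.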