Jdev 11g taking more time to display data on screen
Hi
I am using JDeveloper 11g, and the web page is very slow to load. The screen has an ADF table with 20 columns, all of which are editable, sortable, and filterable. It takes approximately 8 to 11 seconds just to display the data on the screen. My question: any idea why it takes this long, and is there a workaround?
-> Second question on the same issue: the page gets submitted automatically whenever I enter or modify a value on the screen, instead of waiting for me to choose the Save button to save the data. Any idea why it behaves like this?
Thanks
I have noticed the same problem: if you want to do something in the on-load it takes time. The problem occurs only while loading for the first time; after that it's okay.
Similar Messages
-
Taking more time for retrieving data from nested table
Hi
We have two databases, db1 and db2; in db2 we have a number of nested tables.
The two databases are linked, and whenever you fire any query in db1 it internally accesses the nested tables in db2.
Fetching records takes much more time even though there are few records in the table. What could be the reason?
Please help; we are facing this problem daily.
Please avoid duplicate thread:
queries taking more time
Nicolas.
+< mod. action : thread locked>+ -
BAM reports taking more time while fetching data from EDS
I have created an external Data Source (EDS) in BAM, and when I create an object with that EDS it takes very long to fetch data (more than 20 minutes). But since the database is installed on my local system, when I fire a query there directly it doesn't take much time. Please help me with this; what could be the possible reason?
Your messaging gateway question would be better addressed in the SQL PL/SQL forum
All Places > Database > Oracle Database + Options > SQL and PL/SQL > Discussions -
CDP Performance Issue-- Taking more time fetch data
Hi,
I'm working on Stellent 7.5.1.
For one of the portlets in the portal it is taking more time to fetch data. Can someone please help me solve this issue so that performance can be improved? This is my code for fetching data from the server:
public void getManager(final HashMap binderMap)
    throws VistaInvalidInputException, VistaDataNotFoundException,
        DataException, ServiceException, VistaTemplateException {
    String collectionID =
        getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
    long firstStartTime = System.currentTimeMillis();
    HashMap resultSetMap = null;
    String isNonRecursive = getStringLocal(VistaFolderConstants
        .ISNONRECURSIVE_KEY);
    if (isNonRecursive != null
            && isNonRecursive.equalsIgnoreCase(
                VistaContentFetchHelperConstants.STRING_TRUE)) {
        VistaLibraryContentFetchManager libraryContentFetchManager =
            new VistaLibraryContentFetchManager(binderMap);
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        resultSetMap = libraryContentFetchManager
            .getFolderContentItems(m_workspace);
        // Add the result set to the binder.
        addResultSetToBinder(resultSetMap, true);
    } else {
        long startTime = System.currentTimeMillis();
        // isStandard decides whether the call is Standard or Extended.
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        String isStandard = getTemplateInformation(binderMap);
        long endTimeTemplate = System.currentTimeMillis();
        binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
        long endTimebinderMap = System.currentTimeMillis();
        VistaContentFetchManager contentFetchManager =
            new VistaContentFetchManager(binderMap);
        long endTimeFetchManager = System.currentTimeMillis();
        resultSetMap = contentFetchManager
            .getAllFolderContentItems(m_workspace);
        long endTimeresultSetMap = System.currentTimeMillis();
        // Add the result set and the total number of content items
        // to the binder.
        addResultSetToBinder(resultSetMap, false);
        long endTime = System.currentTimeMillis();
        if (perfLogEnable.equalsIgnoreCase("true")) {
            Log.info("Time taken to execute " +
                "getTemplateInformation=" +
                (endTimeTemplate - startTime) +
                "ms binderMap=" +
                (endTimebinderMap - startTime) +
                "ms contentFetchManager=" +
                (endTimeFetchManager - startTime) +
                "ms resultSetMap=" +
                (endTimeresultSetMap - startTime) +
                "ms getManager:getAllFolderContentItems = " +
                (endTime - startTime) +
                "ms overallTime=" +
                (endTime - firstStartTime) +
                "ms folderID =" + collectionID);
        }
    }
}
Edited by: 838623 on Feb 22, 2011 1:43 AM
Hi.
The SELECT statement accessing the MSEG table is slow many a time.
To improve the performance of MSEG:
1. Check for the proper notes in the Service Marketplace if you are working on a CIN version.
2. Index the MSEG table.
3. Check and limit the columns in the SELECT statement.
Possible way:
SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
       EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
  FROM MSEG
  INTO CORRESPONDING FIELDS OF TABLE ITAB
  WHERE WERKS EQ P_WERKS AND
        MBLNR IN S_MBLNR AND
        BWART EQ '105'.
DELETE itab WHERE itab EQ '5002361303'.
DELETE itab WHERE itab EQ '5003501080'.
DELETE itab WHERE itab EQ '5002996300'.
DELETE itab WHERE itab EQ '5002996407'.
DELETE itab WHERE itab EQ '5003587026'.
DELETE itab WHERE itab EQ '5003587026'.
DELETE itab WHERE itab EQ '5003493186'.
DELETE itab WHERE itab EQ '5002720583'.
DELETE itab WHERE itab EQ '5002928122'.
DELETE itab WHERE itab EQ '5002628263'.
Select
Regards
Bala.M
Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM -
Taking more time for loading Real Cost estimates
Dear Experts,
It is taking more time to load data into the cube CO-PC: Product Cost Planning - Released Cost Estimates (0COPC_C09). The update mode is "Full Update". There are only 105,607 records. Other areas have more records than this, but they get loaded easily.
I have this problem only with 0COPC_C09. Could anybody guide me?
Rgds
ACE
suresh.ratnaji wrote:
NAME TYPE VALUE
_optimizer_cost_based_transformation string OFF
filesystemio_options string asynch
object_cache_optimal_size integer 102400
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string choose
optimizer_secure_view_merging boolean TRUE
plsql_optimize_level integer 2
please let me know why it is taking more time in INDEX RANGE SCAN compared to the full table scan?
Suresh,
Any particular reason why you have a non-default value for a hidden parameter, _optimizer_cost_based_transformation?
On my 10.2.0.1 database, its default value is "linear". What happens when you reset the value of the hidden parameter to default? -
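To actually reset it, here is a hedged sketch of the 10g syntax (hidden parameters should only be changed, or reset, under Oracle Support guidance; the default value "linear" is as quoted above):

```sql
-- Remove the hidden-parameter override from the spfile;
-- takes effect after an instance restart.
ALTER SYSTEM RESET "_optimizer_cost_based_transformation" SCOPE = SPFILE SID = '*';

-- Or try the default behaviour in a single session first:
ALTER SESSION SET "_optimizer_cost_based_transformation" = 'linear';
```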
Oracle 11g: Oracle insert/update operation is taking more time.
Hello All,
In Oracle 11g (Windows 2008 32-bit environment) we are facing the following issue.
1) We are inserting/updating data in some tables (4-5 tables, and we are firing queries at a very high rate).
2) After some time (say 15 days under the same load) we feel that the Oracle insert/update operations are taking more time.
Query1: How to find out whether Oracle is actually taking more time in insert/update operations.
Query2: How to rectify the problem.
We are in a multithreaded environment.
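For Query1, one hedged starting point on 11g is to rank DML statements by cumulative elapsed time in the shared SQL area; comparing this over the 15-day window shows whether per-execution times are really growing:

```sql
-- Top 10 insert/update statements by total elapsed time.
-- elapsed_time is in microseconds, hence the conversions below.
SELECT *
  FROM (SELECT sql_id,
               executions,
               ROUND(elapsed_time / 1e6, 1) AS total_elapsed_s,
               ROUND(elapsed_time / NULLIF(executions, 0) / 1e3, 2) AS ms_per_exec,
               SUBSTR(sql_text, 1, 60) AS sql_text
          FROM v$sql
         WHERE UPPER(sql_text) LIKE 'INSERT%'
            OR UPPER(sql_text) LIKE 'UPDATE%'
         ORDER BY elapsed_time DESC)
 WHERE ROWNUM <= 10;
```

AWR or Statspack reports over the same period give the same comparison with history kept for you.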
Thanks
With Regards
Hemant.
Liron Amitzi wrote:
Hi Nicolas,
Just a short explanation:
Suppose you have a table with 1 column (let's say a number), the table is empty, and you have an index on the column.
When you insert a row, the value of the column will be inserted into the index. Inserting 1 value into an index with 10 values in it will be fast. It will take longer to insert 1 value into an index with 1 million values in it.
My second example: I take the same table, insert 10 rows, and delete the previous 10 from the table. I always have 10 rows in the table, so the index should stay small. But this is not correct. If I insert values 1-10, then delete 1-10 and insert 11-20, then delete 11-20 and insert 21-30, and so on, then because the index is sorted, where 1-10 were stored I'll now have empty spots. Oracle will not fill them up. So the index becomes larger and larger as I insert more rows (even though I delete the old ones).
The solution here is simply to rebuild the index once in a while.
Hope it is clear.
Liron Amitzi
Senior DBA consultant
[www.dbsnaps.com]
[www.orbiumsoftware.com]
Hmmm, index space not reused? Index rebuild once in a while? That was what I understood from your previous post, but nothing is less sure.
This is a misconception of how indexes work.
I would suggest reading the following interesting doc; there are a lot of nice examples (including index space reuse) to understand, and in conclusion:
http://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth.pdf
"+Index Rebuild Summary+
+•*The vast majority of indexes do not require rebuilding*+
+•Oracle B-tree indexes can become “unbalanced” and need to be rebuilt is a myth+
+•*Deleted space in an index is “deadwood” and over time requires the index to be rebuilt is a myth*+
+•If an index reaches “x” number of levels, it becomes inefficient and requires the index to be rebuilt is a myth+
+•If an index has a poor clustering factor, the index needs to be rebuilt is a myth+
+•To improve performance, indexes need to be regularly rebuilt is a myth+"
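To check empirically whether deleted space in a particular index is being reused, rather than rebuilding on a schedule, one option is the INDEX_STATS approach (index name hypothetical; VALIDATE STRUCTURE takes a lock on the table, so run it off-peak):

```sql
-- Populate INDEX_STATS for one index.
ANALYZE INDEX my_schema.my_index VALIDATE STRUCTURE;

-- Deleted leaf rows vs. total leaf rows. Oracle normally reuses this
-- space on later inserts, so a high percentage here is not, by itself,
-- a reason to rebuild.
SELECT name, lf_rows, del_lf_rows,
       ROUND(100 * del_lf_rows / NULLIF(lf_rows, 0), 1) AS pct_deleted
  FROM index_stats;
```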
Good reading,
Nicolas. -
Deletion of Data Mart Request taking more time
Hi all,
One of the Data Mart processes (ODS to InfoCube) has failed.
I am not able to delete the request in the InfoCube.
When the delete option is executed it takes more time; while I monitor that job in SM37 it does not get completed.
The details seen in the Job Log is as shown below,
Job started
Step 001 started (program RSDELPART1, variant &0000000006553, user ID SSRIN90)
Delete is running: Data target BUS_CONT, from 376,447 to 376,447
Please let me know your suggestions.
Thanks,
Sowrabh
Hi,
How many records are there in that request? Usually when you try to delete a request it takes time; depending on the data volume in the request, the deletion time will vary.
Give it some more time and see if it finishes. To know whether the job is actually doing anything, go to SM50/51 and look at what's happening.
Cheers,
Kedar -
SSRS report taking more time but fast in SSMS
The SSRS report takes more time but is fast in SSMS. We are binding strings of more than 43,000 characters each, and the total number of records is 4,000.
Please do the needful. It is not possible to create an index because the key crosses 900 bytes.
Hi DBA5,
As per my understanding, when previewing the report, there are three options to affect the report performance. They are data retrieval time, processing time, rendering time.
The data retrieval time is the time taken to retrieve the data from the server using queries or stored procedures; the processing time is the time spent processing the data within the report using the operations in the report layout; and the rendering time is the time taken to display the report information in a chosen format such as Excel, PDF, or HTML. That is why the report takes more time than it does in SSMS.
To improve performance, we need to improve these three aspects. For details, please see this blog: http://blogs.msdn.com/b/mariae/archive/2009/04/16/quick-tips-for-monitoring-and-improving-performance-in-reporting-services.aspx
Hope this helps.
Regards,
Heidi Duan
TechNet Community Support -
hi all,
I am using Forms [32 Bit] Version 6.0.8.24.1 (Production).
My ultimate task is to read data from Excel and insert it into the table. If there is any duplication of records, stop the process and tell the user which records are repeating.
I have used the OLE2 package to read the data from Excel.
I have three PL/SQL tables for further processing:
one table to hold whatever data comes from Excel, a 2nd table to check the current record against previously inserted records, and a 3rd table to store the duplicated records.
I insert each record as-is into table 1 and check in table 2 for the existence of the record; if the current record matches a previous record, I store it in the 3rd table.
At the end, if the error table's (3rd table) row count is 0 there is no duplication, so I write the 1st table's data into the form and save; otherwise I display the table 3 (error table) data in the form and tell the user that these records are duplicated.
But it is taking a lot of time.
Can anybody tell me logic that reduces the time consumption (better performance)?
Thanks..
hi,
I don't think it's a SQL issue, because if I enter the data manually (not through the Excel upload) it works just fine. The form is a database block, so I have not written any SQL.
What I am doing is uploading data from Excel to a PL/SQL table and checking for duplication; if any duplication is found I display the error PL/SQL table, otherwise I display the first table as I mentioned earlier. The 2nd table is there to check for the existence of the current record (since I am reading from Excel record by record).
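A set-based alternative to the three PL/SQL tables: load the spreadsheet rows into a staging table first and let one query report the duplicates (table and column names here are hypothetical):

```sql
-- Rows that appear more than once in the uploaded data; if this
-- returns no rows, the whole upload can proceed.
SELECT record_key, COUNT(*) AS occurrences
  FROM stg_excel_upload
 GROUP BY record_key
HAVING COUNT(*) > 1;
```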
Thanks.. -
Source System Extraction taking more time
Hi All,
I Have an Issue, please help me out in this.
We have two source systems in our process. One is working fine; we don't have any issue with that. But the other source system is taking more time than usual to extract the data. We have been facing this problem only for a few days, and the record count is normal. I am unable to find the root cause; please help me out with this issue.
It is a full update. We are using a custom program to extract the data. It runs daily.
Thanks in Advance.
Regards,
Suresh
Edited by: sureshd on Jul 14, 2011 6:04 PM
Edited by: sureshd on Jul 14, 2011 6:05 PM
When a long-running job is running on the source system side you can trace it through ST05. Simultaneously you can monitor SM50/SM66 to see which tables are being accessed during the job.
After the loading is completed, you can deactivate the trace and then go to the Display Trace option, where you will see the various SQL statements along with their tables. Check which particular statement is expensive by looking at the time stamps.
The next job is to decide why that table is being accessed for so much time and how to improve its run time: check the fields inside the table and their selectivity, and see whether any index already exists on them (you can do this in SE11). If no index is available on those specific fields, ask your Basis guy to create a secondary index on top of them, which may help performance. Hope it helps. -
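If the underlying database is Oracle, the indexes that already exist on the traced table can also be listed from the data dictionary (the table name below is an example; on SAP systems any index change should still be made through SE11, not direct DDL):

```sql
-- Columns of every index currently defined on the table.
SELECT index_name, column_position, column_name
  FROM all_ind_columns
 WHERE table_name = 'MSEG'
 ORDER BY index_name, column_position;
```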
Admin server taking more time to start
Hi All,
I installed OIM 11g R2 on Windows Server 2008,
and starting the admin server takes a long time.
Steps I have done:
changed the memory parameters in the setSOADomainEnv, setOIMDomainEnv, and oimenv.bat files.
While starting, the admin server shows memory parameters like -Xms768m ... -Xmx1024m.
Can anyone please suggest what changes are needed to make it start faster?
Thanks,
Valli
user623166 wrote:
Hi Gurus
my database is Oracle 10.2.0.3 on AIX
I have started expdp to export around 130 selected tables, but it is taking a long time to start; almost 20 minutes have passed since it started.
$ expdp system/*** parfile=parfile.par
Export: Release 10.2.0.3.0 - 64bit Production on Friday, 22 June, 2012 17:21:26
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."SYS_EXPORT_TABLE_02": system/******** parfile=parfile.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
I am not sure this is enough information for us to suggest something. For starters, can you try selecting fewer tables and see what's going on?
Aman.... -
Clicking Drill Down Plus Symbol is taking more time
Hi ,
My drill-down report has 2000 records; initially it takes 25 seconds to render the report, and after that, when I click the plus symbol, it takes another 20 seconds to display the data. My question: in a drill-down report the data is fetched in one hit when the report loads, so why does clicking the plus symbol take more time when the data already exists in the datasets?
Please tell me where I am making a mistake.
Regards,
HariKan
Hi HariKan,
According to your description, when you preview the drill down report, after you click plus sign, it takes 20 seconds to expand the detail information.
The total time to generate a reporting server report (RDL) can be divided into 3 elements:
TimeDataRetrieval: The time needed for SQL Server to retrieve the data of all datasets in your report.
TimeProcessing: The number of milliseconds spent in the processing engine for the request.
TimeRendering: The time spent after the Rendering Object Model is exposed to the rendering extension.
After you click the plus sign, report rendering occurs: data and layout are combined into an interim format and then passed to a rendering extension. The time includes:
Time spent in renderer.
Time spent in width and height of report items.
Time spent in paging.
Time spent in all expression evaluation.
For more information about report processing, please refer to the following document:
http://blogs.msdn.com/b/robertbruckner/archive/2009/01/05/executionlog2-view.aspx
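The same breakdown can be read directly from the report server's execution log; a hedged sketch against the default ReportServer database:

```sql
-- Timing breakdown (in milliseconds) for recent report executions.
SELECT TOP 20
       ReportPath,
       TimeDataRetrieval,
       TimeProcessing,
       TimeRendering,
       TimeStart
  FROM ReportServer.dbo.ExecutionLog2
 ORDER BY TimeStart DESC;
```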
If you have any more questions, please feel free to ask.
Thanks,
Wendy Fu
TechNet Community Support -
Query in timesten taking more time than query in oracle database
Hi,
Can anyone please explain why a query in TimesTen is taking more time
than the same query in the Oracle database?
I describe below, step by step, my settings and what I have done.
1.This is the table I created in Oracle datababase
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0)...
CREATE TABLE student (
  id NUMBER(9) PRIMARY KEY,
  first_name VARCHAR2(10),
  last_name VARCHAR2(10)
);
2.THIS IS THE ANONYMOUS BLOCK I USE TO
POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
declare
firstname varchar2(12);
lastname varchar2(12);
catt number(9);
begin
for cntr in 1..2599999 loop
firstname:=(cntr+8)||'f';
lastname:=(cntr+2)||'l';
if cntr like '%9999' then
dbms_output.put_line(cntr);
end if;
insert into student values(cntr,firstname, lastname);
end loop;
end;
3. MY DSN IS SET THE FOLLWING WAY..
DATA STORE PATH- G:\dipesh3repo\db
LOG DIRECTORY- G:\dipesh3repo\log
PERM DATA SIZE-1000
TEMP DATA SIZE-1000
MY TIMESTEN VERSION-
C:\Documents and Settings\dipesh>ttversion
TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
Instance admin: dipesh
Instance home directory: G:\TimestTen\TT70_32
Daemon home directory: G:\TimestTen\TT70_32\srv\info
THEN I CONNECT TO THE TIMESTEN DATABASE
C:\Documents and Settings\dipesh> ttisql
command>connect "dsn=dipesh3;oraclepwd=tiger";
4. THEN I START THE AGENT
call ttCacheUidPwdSet('SCOTT','TIGER');
Command> CALL ttCacheStart();
5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
create readonly cache group rc_student autorefresh
interval 5 seconds from student
(id int not null primary key, first_name varchar2(10), last_name varchar2(10));
load cache group rc_student commit every 100 rows;
6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
I SET THE TIMING..
command>TIMING 1;
consider this query now..
Command> select * from student where first_name='2155666f';
< 2155658, 2155666f, 2155660l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
another query-
Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
2206: Table SCOTT.STUDENTS not found
Execution time (SQLPrepare) = 0.074964 seconds.
The command failed.
Command> SELECT * FROM STUDENT where first_name='2093434f';
< 2093426, 2093434f, 2093428l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
Command>
7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
ID FIRST_NAME LAST_NAME
1498663 1498671f 1498665l
Elapsed: 00:00:00.15
Can anyone please explain why the query in TimesTen is taking more time
than the query in the Oracle database.
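One thing worth checking in both databases: only the primary key on id is indexed, so a predicate on first_name forces a scan of all 2,599,999 rows. A hedged sketch (the index name is an assumption; check that your TimesTen release permits extra indexes on cache-group tables):

```sql
-- An index on the filtered column turns the scan into a lookup;
-- create it in Oracle and, for the cached table, in TimesTen too.
CREATE INDEX student_first_name_ix ON student (first_name);

-- Then re-run the timed query:
SELECT * FROM student WHERE first_name = '2155666f';
```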
Message was edited by: Dipesh Majumdar
Message was edited by: user542575
TimesTen
Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
Version: 7.0.4.0.0 64 bit
Schema:
create usermanaged cache group factCache from
MV_US_DATAMART
ORDER_DATE DATE,
IF_SYSTEM VARCHAR2(32) NOT NULL,
GROUPING_ID TT_BIGINT,
TIME_DIM_ID TT_INTEGER NOT NULL,
BUSINESS_DIM_ID TT_INTEGER NOT NULL,
ACCOUNT_DIM_ID TT_INTEGER NOT NULL,
ORDERTYPE_DIM_ID TT_INTEGER NOT NULL,
INSTR_DIM_ID TT_INTEGER NOT NULL,
EXECUTION_DIM_ID TT_INTEGER NOT NULL,
EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
NO_ORDERS TT_BIGINT,
FILLED_QUANTITY TT_BIGINT,
CNT_FILLED_QUANTITY TT_BIGINT,
QUANTITY TT_BIGINT,
CNT_QUANTITY TT_BIGINT,
COMMISSION BINARY_FLOAT,
CNT_COMMISSION TT_BIGINT,
FILLS_NUMBER TT_BIGINT,
CNT_FILLS_NUMBER TT_BIGINT,
AGGRESSIVE_FILLS TT_BIGINT,
CNT_AGGRESSIVE_FILLS TT_BIGINT,
NOTIONAL BINARY_FLOAT,
CNT_NOTIONAL TT_BIGINT,
TOTAL_PRICE BINARY_FLOAT,
CNT_TOTAL_PRICE TT_BIGINT,
CANCELLED_ORDERS_COUNT TT_BIGINT,
CNT_CANCELLED_ORDERS_COUNT TT_BIGINT,
ROUTED_ORDERS_NO TT_BIGINT,
CNT_ROUTED_ORDERS_NO TT_BIGINT,
ROUTED_LIQUIDITY_QTY TT_BIGINT,
CNT_ROUTED_LIQUIDITY_QTY TT_BIGINT,
REMOVED_LIQUIDITY_QTY TT_BIGINT,
CNT_REMOVED_LIQUIDITY_QTY TT_BIGINT,
ADDED_LIQUIDITY_QTY TT_BIGINT,
CNT_ADDED_LIQUIDITY_QTY TT_BIGINT,
AGENT_CHARGES BINARY_FLOAT,
CNT_AGENT_CHARGES TT_BIGINT,
CLEARING_CHARGES BINARY_FLOAT,
CNT_CLEARING_CHARGES TT_BIGINT,
EXECUTION_CHARGES BINARY_FLOAT,
CNT_EXECUTION_CHARGES TT_BIGINT,
TRANSACTION_CHARGES BINARY_FLOAT,
CNT_TRANSACTION_CHARGES TT_BIGINT,
ORDER_MANAGEMENT BINARY_FLOAT,
CNT_ORDER_MANAGEMENT TT_BIGINT,
SETTLEMENT_CHARGES BINARY_FLOAT,
CNT_SETTLEMENT_CHARGES TT_BIGINT,
RECOVERED_AGENT BINARY_FLOAT,
CNT_RECOVERED_AGENT TT_BIGINT,
RECOVERED_CLEARING BINARY_FLOAT,
CNT_RECOVERED_CLEARING TT_BIGINT,
RECOVERED_EXECUTION BINARY_FLOAT,
CNT_RECOVERED_EXECUTION TT_BIGINT,
RECOVERED_TRANSACTION BINARY_FLOAT,
CNT_RECOVERED_TRANSACTION TT_BIGINT,
RECOVERED_ORD_MGT BINARY_FLOAT,
CNT_RECOVERED_ORD_MGT TT_BIGINT,
RECOVERED_SETTLEMENT BINARY_FLOAT,
CNT_RECOVERED_SETTLEMENT TT_BIGINT,
CLIENT_AGENT BINARY_FLOAT,
CNT_CLIENT_AGENT TT_BIGINT,
CLIENT_ORDER_MGT BINARY_FLOAT,
CNT_CLIENT_ORDER_MGT TT_BIGINT,
CLIENT_EXEC BINARY_FLOAT,
CNT_CLIENT_EXEC TT_BIGINT,
CLIENT_TRANS BINARY_FLOAT,
CNT_CLIENT_TRANS TT_BIGINT,
CLIENT_CLEARING BINARY_FLOAT,
CNT_CLIENT_CLEARING TT_BIGINT,
CLIENT_SETTLE BINARY_FLOAT,
CNT_CLIENT_SETTLE TT_BIGINT,
CHARGEABLE_TAXES BINARY_FLOAT,
CNT_CHARGEABLE_TAXES TT_BIGINT,
VENDOR_CHARGE BINARY_FLOAT,
CNT_VENDOR_CHARGE TT_BIGINT,
ROUTING_CHARGES BINARY_FLOAT,
CNT_ROUTING_CHARGES TT_BIGINT,
RECOVERED_ROUTING BINARY_FLOAT,
CNT_RECOVERED_ROUTING TT_BIGINT,
CLIENT_ROUTING BINARY_FLOAT,
CNT_CLIENT_ROUTING TT_BIGINT,
TICKET_CHARGES BINARY_FLOAT,
CNT_TICKET_CHARGES TT_BIGINT,
RECOVERED_TICKET_CHARGES BINARY_FLOAT,
CNT_RECOVERED_TICKET_CHARGES TT_BIGINT,
PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
READONLY);
No of rows: 2228558
Config:
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
< ConnectionCharacterSet, US7ASCII >
< ConnectionName, tt_us_dma >
< Connections, 64 >
< DataBaseCharacterSet, AL32UTF8 >
< DataStore, e:\andrew\datacache\usDMA >
< DurableCommits, 0 >
< GroupRestrict, <NULL> >
< LockLevel, 0 >
< LockWait, 10 >
< LogBuffSize, 65536 >
< LogDir, e:\andrew\datacache\ >
< LogFileSize, 64 >
< LogFlushMethod, 1 >
< LogPurge, 0 >
< Logging, 1 >
< MemoryLock, 0 >
< NLS_LENGTH_SEMANTICS, BYTE >
< NLS_NCHAR_CONV_EXCP, 0 >
< NLS_SORT, BINARY >
< OracleID, NYCATP1 >
< PassThrough, 0 >
< PermSize, 4000 >
< PermWarnThreshold, 90 >
< PrivateCommands, 0 >
< Preallocate, 0 >
< QueryThreshold, 0 >
< RACCallback, 0 >
< SQLQueryTimeout, 0 >
< TempSize, 514 >
< TempWarnThreshold, 90 >
< Temporary, 1 >
< TransparentLoad, 0 >
< TypeMode, 0 >
< UID, OS_OWNER >
ORACLE:
Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
Schema:
CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
TABLESPACE TS_OS
PARTITION BY RANGE (ORDER_DATE)
PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
LOGGING
NOCOMPRESS
TABLESPACE TS_OS
NOCACHE
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
USING INDEX
TABLESPACE TS_OS_INDEX
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT order_date, if_system,
GROUPING_ID (order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id
) GROUPING_ID,
/* ============ DIMENSIONS ============ */
time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
instr_dim_id, execution_dim_id, exec_exchange_dim_id,
/* ============ MEASURES ============ */
-- o.FX_RATE /* FX_RATE */,
COUNT (*) no_orders,
-- SUM(NO_ORDERS) NO_ORDERS,
-- COUNT(NO_ORDERS) CNT_NO_ORDERS,
SUM (filled_quantity) filled_quantity,
COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
COUNT (quantity) cnt_quantity, SUM (commission) commission,
COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
COUNT (fills_number) cnt_fills_number,
SUM (aggressive_fills) aggressive_fills,
COUNT (aggressive_fills) cnt_aggressive_fills,
SUM (fx_rate * filled_quantity * average_price) notional,
COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
SUM (fx_rate * fills_number * average_price) total_price,
COUNT (fx_rate * fills_number * average_price) cnt_total_price,
SUM (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END) cancelled_orders_count,
COUNT (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END
) cnt_cancelled_orders_count,
-- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
-- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
-- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
SUM (routed_orders_no) routed_orders_no,
COUNT (routed_orders_no) cnt_routed_orders_no,
SUM (routed_liquidity_qty) routed_liquidity_qty,
COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
SUM (removed_liquidity_qty) removed_liquidity_qty,
COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
SUM (added_liquidity_qty) added_liquidity_qty,
COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
SUM (agent_charges) agent_charges,
COUNT (agent_charges) cnt_agent_charges,
SUM (clearing_charges) clearing_charges,
COUNT (clearing_charges) cnt_clearing_charges,
SUM (execution_charges) execution_charges,
COUNT (execution_charges) cnt_execution_charges,
SUM (transaction_charges) transaction_charges,
COUNT (transaction_charges) cnt_transaction_charges,
SUM (order_management) order_management,
COUNT (order_management) cnt_order_management,
SUM (settlement_charges) settlement_charges,
COUNT (settlement_charges) cnt_settlement_charges,
SUM (recovered_agent) recovered_agent,
COUNT (recovered_agent) cnt_recovered_agent,
SUM (recovered_clearing) recovered_clearing,
COUNT (recovered_clearing) cnt_recovered_clearing,
SUM (recovered_execution) recovered_execution,
COUNT (recovered_execution) cnt_recovered_execution,
SUM (recovered_transaction) recovered_transaction,
COUNT (recovered_transaction) cnt_recovered_transaction,
SUM (recovered_ord_mgt) recovered_ord_mgt,
COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
SUM (recovered_settlement) recovered_settlement,
COUNT (recovered_settlement) cnt_recovered_settlement,
SUM (client_agent) client_agent,
COUNT (client_agent) cnt_client_agent,
SUM (client_order_mgt) client_order_mgt,
COUNT (client_order_mgt) cnt_client_order_mgt,
SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
SUM (client_trans) client_trans,
COUNT (client_trans) cnt_client_trans,
SUM (client_clearing) client_clearing,
COUNT (client_clearing) cnt_client_clearing,
SUM (client_settle) client_settle,
COUNT (client_settle) cnt_client_settle,
SUM (chargeable_taxes) chargeable_taxes,
COUNT (chargeable_taxes) cnt_chargeable_taxes,
SUM (vendor_charge) vendor_charge,
COUNT (vendor_charge) cnt_vendor_charge,
SUM (routing_charges) routing_charges,
COUNT (routing_charges) cnt_routing_charges,
SUM (recovered_routing) recovered_routing,
COUNT (recovered_routing) cnt_recovered_routing,
SUM (client_routing) client_routing,
COUNT (client_routing) cnt_client_routing,
SUM (ticket_charges) ticket_charges,
COUNT (ticket_charges) cnt_ticket_charges,
SUM (recovered_ticket_charges) recovered_ticket_charges,
COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
FROM us_datamart_raw
GROUP BY order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id;
-- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
-- by Oracle with the associated materialized view.
CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
NOLOGGING
NOPARALLEL
COMPRESS 7;
No of rows: 2228558
The query (taken Mondrian) I run against each of them is:
select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
--, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
--, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
--, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
--, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
--, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
--, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
--, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
--, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
--, sum("MV_US_DATAMART"."COMMISSION") as "m9"
--, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
--, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
--,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
--,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
--, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
--, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
--, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
--, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
--,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
--, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
I uncomment one column at a time and rerun. I improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results were:
No. of columns    Oracle    TimesTen
1                 1.05      0.94
2                 1.07      1.47
3                 2.04      1.80
4                 2.06      2.08
5                 2.09      2.40
6                 3.01      2.67
7                 4.02      3.06
8                 4.03      3.37
9                 4.04      3.62
10                4.06      4.02
11                4.08      4.31
12                4.09      4.61
13                5.01      4.76
14                5.02      5.06
15                5.04      5.25
16                5.05      5.48
17                5.08      5.84
18                6.00      6.21
19                6.02      6.34
20                6.04      6.75
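For reference, the NUMBER-to-BINARY_FLOAT retyping mentioned above can be done with an explicit CAST in the MV's defining query. A minimal sketch; the MV name and the two measure columns shown are just illustrative:

```sql
-- Sketch: store measures as BINARY_FLOAT by casting in the defining
-- query; hardware floating point aggregates faster than NUMBER, at
-- the cost of exact decimal arithmetic.
CREATE MATERIALIZED VIEW mv_us_datamart_bf AS
SELECT order_date,
       CAST(SUM(routing_charges) AS BINARY_FLOAT) AS routing_charges,
       CAST(SUM(ticket_charges)  AS BINARY_FLOAT) AS ticket_charges
FROM   us_datamart_raw
GROUP  BY order_date;
```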
XML Publisher(XDODTEXE) in EBS taking more time with the same SQL in TOAD
Hi,
XML Publisher (XDODTEXE) in EBS is taking much more time than the same SQL in TOAD.
The SQL has 5 union clauses.
It takes 20-30 minutes in TOAD, but around 4-5 hours when run through a Concurrent Program in XML Publisher in EBS.
The Scalable flag at report level is turned on, with the JVM options set to -Xms1024m -Xmx1024m in the Concurrent Program definition.
Other Data Template configurations (XSLT, Scalable, Optimization) are turned on, though I didn't bounce the OPP server for these to take effect, as I am not sure whether that is needed.
Thanks in advance for your help.
But the question is: how come it works in TOAD and takes only 15-20 minutes?
With initialization of the session?
What about SQL*Plus?
Do I have to set up the temp directory for the XML Publisher report to make it faster?
Look at:
R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues (Doc ID 1410160.1)
BI Publisher - Troubleshooting Oracle Business Intelligence (XML) Publisher For The Oracle E-Business Suite (Doc ID 364547.1)
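On the temp-directory question above: scalable mode and its temp directory are usually set in the XML Publisher configuration file (xdo.cfg). A hedged sketch only; the file location, property names, and values vary by release, so verify them against the notes cited above:

```xml
<!-- Sketch of xdo.cfg entries; the path shown is illustrative -->
<config version="1.0.0" xmlns="http://xmlns.oracle.com/oxp/config/">
  <properties>
    <!-- Stream large XSL-T transforms through disk instead of memory -->
    <property name="xslt-scalable">true</property>
    <!-- Writable temp directory used by scalable mode -->
    <property name="system-temp-dir">/u01/app/xdo_temp</property>
  </properties>
</config>
```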
Snapshot Refresh taking More Time
Dear All,
We are facing a Snapshot refresh problem currently in Production Environment.
Oracle Version : Oracle8i Enterprise Edition Release 8.1.6.1.0
Currently we have created a Snapshot on a Join with 2 remote tables using Synonyms.
ex:
CREATE SNAPSHOT XYZ REFRESH COMPLETE WITH ROWID
AS
SELECT a.* FROM SYN1 a, SYN2 b
Where b.ACCT_NO=a.ACCT_NO;
We have created a Index on the above Snapshot XYZ.
Create index XYZ_IDX1 on XYZ (ACCT_NO);
a. The explain plan of the above query shows an index scan on SYN1.
If we run the above SELECT statement, it hardly takes 2 seconds to execute.
b. But the complete refresh of snapshot XYZ takes almost 20 minutes just to truncate and insert 500 records, and generates huge disk reads, as the remote table behind SYN1 contains 32 million records whereas SYN2 contains only 500 records.
If we manually truncate and insert into a table, as the complete refresh of the snapshot does, it hardly takes 4 seconds to refresh the table.
Please let me know what might be the possible reasons for the complete refresh of the snapshot taking more time.
Dear All,
While refreshing the snapshot XYZ, I found the following.
While refreshing the Snapshot XYZ,I could find the following.
a. A sort/merge operation was performed while inserting the data into the snapshot.
INSERT /*+ APPEND */ INTO "XYZ"
SELECT a.* FROM SYN1 a, SYN2 b Where b.ACCT_NO=a.ACCT_NO;
The above operation performed huge disk reads.
b. By changing the session parameter sort_area_size, the time decreased by 50%, but the disk reads are still huge.
I would like to know why a sort/merge operation is performed by the above INSERT.
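The sort/merge is almost certainly a sort-merge join that the optimizer chooses for the distributed join during the refresh INSERT, even though the interactive query gets a nested-loops plan. One common workaround (a sketch, not a verified fix for this exact case) is to hint the snapshot's defining query so the refresh inherits a nested-loops plan driven by the small table:

```sql
-- Sketch: drive the join from the 500-row SYN2 and probe SYN1 through
-- its ACCT_NO index with nested loops, instead of sort-merging the
-- 32M-row table.
CREATE SNAPSHOT xyz REFRESH COMPLETE WITH ROWID
AS
SELECT /*+ ORDERED USE_NL(a) INDEX(a) */ a.*
FROM   syn2 b, syn1 a
WHERE  a.acct_no = b.acct_no;
```

Hints embedded in the defining query are carried into the refresh, so this avoids hinting the refresh INSERT itself.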