Check Data Values, Generates Navigation Data steps taking more time
Hi,
In the Masterdata load to 0MAT_PLANT
The step "Updating attributes for InfoObject 0MAT_PLANT" used to take a maximum of 23 minutes for a data package size of 100,000 records until three days ago, with the constituent steps "Check Data Values", "Process Time-Independent Attributes" and "Generates Navigation Data" taking 5, 4 and 12 minutes respectively.
For the last three days the step "Updating attributes for InfoObject 0MAT_PLANT" has been taking 58-60 minutes for the same data package size of 100,000 records, with the constituent steps "Check Data Values", "Process Time-Independent Attributes" and "Generates Navigation Data" taking 12, 14 and 22 minutes respectively.
Can anyone please explain what is needed to speed the data load back up to its previous timing, e.g. whether table indexes need to be built for the 'P' or 'SID' table of 0MAT_PLANT?
An explanation of why the delay could be happening in the above steps would be really great.
regards,
Kulmohan
Hi Kulmohan Singh,
I think the difference between the first update and the second update is that the 100,000 records are
already in your 0MAT_PLANT.
If you update 0MAT_PLANT a second time, the MODIFY statement has to loop over the Q table of
0MAT_PLANT, which already holds the 100,000 records, so the upload is slower.
Hope this helps.
Regards,
James.
Similar Messages
-
CDP Performance Issue -- Taking more time to fetch data
Hi,
I'm working on Stellent 7.5.1.
For one of the portlets in the portal, it is taking a long time to fetch data. Can someone please help me solve this issue so that performance can be improved? This is my code for fetching data from the server:
public void getManager(final HashMap binderMap)
    throws VistaInvalidInputException, VistaDataNotFoundException,
           DataException, ServiceException, VistaTemplateException {
    String collectionID =
        getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
    long firstStartTime = System.currentTimeMillis();
    HashMap resultSetMap = null;
    String isNonRecursive = getStringLocal(VistaFolderConstants
        .ISNONRECURSIVE_KEY);
    if (isNonRecursive != null
            && isNonRecursive.equalsIgnoreCase(
                VistaContentFetchHelperConstants.STRING_TRUE)) {
        VistaLibraryContentFetchManager libraryContentFetchManager =
            new VistaLibraryContentFetchManager(binderMap);
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        resultSetMap = libraryContentFetchManager
            .getFolderContentItems(m_workspace);
        // Used to add the result set to the binder.
        addResultSetToBinder(resultSetMap, true);
    } else {
        long startTime = System.currentTimeMillis();
        // isStandard is used to decide whether the call is for
        // Standard or Extended.
        SystemUtils.trace(
            VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
            "The input Parameters for Content Fetch = " + binderMap);
        String isStandard = getTemplateInformation(binderMap);
        long endTimeTemplate = System.currentTimeMillis();
        binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
        long endTimebinderMap = System.currentTimeMillis();
        VistaContentFetchManager contentFetchManager =
            new VistaContentFetchManager(binderMap);
        long endTimeFetchManager = System.currentTimeMillis();
        resultSetMap = contentFetchManager
            .getAllFolderContentItems(m_workspace);
        long endTimeresultSetMap = System.currentTimeMillis();
        // Used to add the result set and the total number of content
        // items to the binder.
        addResultSetToBinder(resultSetMap, false);
        long endTime = System.currentTimeMillis();
        if (perfLogEnable.equalsIgnoreCase("true")) {
            Log.info("Time taken to execute " +
                "getTemplateInformation=" +
                (endTimeTemplate - startTime) +
                "ms binderMap=" +
                (endTimebinderMap - startTime) +
                "ms contentFetchManager=" +
                (endTimeFetchManager - startTime) +
                "ms resultSetMap=" +
                (endTimeresultSetMap - startTime) +
                "ms getManager:getAllFolderContentItems=" +
                (endTime - startTime) +
                "ms overallTime=" +
                (endTime - firstStartTime) +
                "ms folderID=" + collectionID);
        }
    }
}
Edited by: 838623 on Feb 22, 2011 1:43 AM
Hi.
The SELECT statement accessing the MSEG table is often slow.
To improve the performance on MSEG:
1. Check for the proper notes in the Service Marketplace if you are working on the CIN version.
2. Index the MSEG table.
3. Check and limit the columns in the SELECT statement.
A possible way:
SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
FROM MSEG
INTO CORRESPONDING FIELDS OF TABLE ITAB
WHERE WERKS EQ P_WERKS AND
MBLNR IN S_MBLNR AND
BWART EQ '105'.
DELETE itab WHERE mblnr EQ '5002361303'.
DELETE itab WHERE mblnr EQ '5003501080'.
DELETE itab WHERE mblnr EQ '5002996300'.
DELETE itab WHERE mblnr EQ '5002996407'.
DELETE itab WHERE mblnr EQ '5003587026'.
DELETE itab WHERE mblnr EQ '5003493186'.
DELETE itab WHERE mblnr EQ '5002720583'.
DELETE itab WHERE mblnr EQ '5002928122'.
DELETE itab WHERE mblnr EQ '5002628263'.
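As a side note, the repeated DELETE statements above scan the internal table once per value; collecting the keys and filtering in a single pass is cheaper. The same idea sketched in Java (purely illustrative; the document numbers are from the post, and in ABAP this would be a single DELETE ... WHERE ... IN over the key field):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FilterDocs {
    // Remove every row whose document number is in toDelete,
    // in a single pass over the table.
    static List<String> purge(List<String> itab, Set<String> toDelete) {
        itab.removeIf(toDelete::contains);
        return itab;
    }

    public static void main(String[] args) {
        Set<String> toDelete = new HashSet<>(Arrays.asList(
            "5002361303", "5003501080", "5002996300", "5002996407",
            "5003587026", "5003493186", "5002720583", "5002928122",
            "5002628263"));
        // Illustrative table contents: two rows to drop, two to keep.
        List<String> itab = new ArrayList<>(Arrays.asList(
            "5002361303", "4900000001", "5002996300", "4900000002"));
        System.out.println(purge(itab, toDelete)); // [4900000001, 4900000002]
    }
}
```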
Regards
Bala.M
Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM -
Deletion of Data Mart Request taking more time
Hi all,
One of the Data Mart processes (ODS to InfoCube) has failed.
I'm not able to delete the request in the InfoCube.
When the delete option is executed, it takes a long time; I am monitoring the job in SM37 and it is not getting completed.
The details seen in the Job Log is as shown below,
Job started
Step 001 started (program RSDELPART1, variant &0000000006553, user ID SSRIN90)
Delete is running: Data target BUS_CONT, from 376,447 to 376,447
Please let me know your suggestions.
Thanks,
Sowrabh
Hi,
How many records are there in that request? Usually deleting a request takes a while, and the deletion time will vary with the data volume in the request.
Give it some more time and see if it finishes. To actually know if the job is doing anything, go to SM50/SM51 and look at what's happening.
Cheers,
Kedar -
How to extract the date value of IBOR date="12/12/2009"
I have the following query, but do not get the date value out:
WITH ibors AS (
SELECT xmltype('<?xml version="1.0" encoding="utf-8"?>
<IBOR date="12/12/2009">
<LIBOR currency="USD">
<OneYear>1.38875</OneYear>
</LIBOR>
</IBOR>
') ibor_xml
FROM dual
)
SELECT i.ibor_date, i.ibor_oneyear
FROM ibors,
XMLTABLE(
'//IBOR'
PASSING ibors.ibor_xml
COLUMNS ibor_date VARCHAR2(20) PATH '/IBOR/date',
ibor_oneyear VARCHAR2(20) PATH '/IBOR/LIBOR/OneYear'
) i;
How to extract the date value of <IBOR date="12/12/2009">?
Hi,
The date is an attribute of element IBOR. So you must use "@date" in the xpath expression :
WITH ibors AS (
SELECT xmltype('<?xml version="1.0" encoding="utf-8"?>
<IBOR date="12/12/2009">
<LIBOR currency="USD">
<OneYear>1.38875</OneYear>
</LIBOR>
</IBOR>
') ibor_xml
FROM dual
)
SELECT i.ibor_date, i.ibor_oneyear
FROM ibors,
XMLTABLE(
'//IBOR'
PASSING ibors.ibor_xml
COLUMNS
ibor_date VARCHAR2(20) PATH '/IBOR/@date',
ibor_oneyear VARCHAR2(20) PATH '/IBOR/LIBOR/OneYear'
) i; -
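As a footnote to the answer above: the element-versus-attribute distinction is not Oracle-specific; any XPath engine behaves the same way. A small self-contained Java illustration using the JDK's built-in XPath support (the XML is the snippet from the question):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class IborXPath {
    static final String XML =
        "<IBOR date=\"12/12/2009\">"
      + "<LIBOR currency=\"USD\"><OneYear>1.38875</OneYear></LIBOR>"
      + "</IBOR>";

    // Evaluate an XPath expression against the sample document and
    // return its string value ("" when nothing matches).
    static String eval(String expr) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(
                XML.getBytes(StandardCharsets.UTF_8)));
        return XPathFactory.newInstance().newXPath().evaluate(expr, doc);
    }

    public static void main(String[] args) throws Exception {
        // '/IBOR/date' looks for a child ELEMENT named date -> no match
        System.out.println("'" + eval("/IBOR/date") + "'");
        // '/IBOR/@date' selects the ATTRIBUTE
        System.out.println(eval("/IBOR/@date"));   // 12/12/2009
        System.out.println(eval("/IBOR/LIBOR/OneYear")); // 1.38875
    }
}
```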
Diff b/w Data Mart & Generate Export data source
Hi All
Here I want one small clarification; please help me with the following.
I want to know the difference between a Data Mart and a Generate Export DataSource.
In my point of view both are a little bit the same.
Could somebody explain with full details?
If you provide any help doc it would be very helpful for me.
Regards n Thanks
kgb
Hi kgbm,
This my point of view:
1) DataMart: in BI you use this term to identify a specific area of analysis in your DataWarehouse. Generally it is obtained by aggregating data from other structures, or by combining internal data with external data.
2) Generate Export DataSource: it is the technical tool to obtain an extractor to feed other structures, called DataMarts.
Ciao.
Riccardo. -
Taking more time for retreving data from nested table
Hi
We have two databases, db1 and db2; in database db2 we have a number of nested tables.
Now the problem: we have a link between the two databases, and whenever you fire any query in db1, internally it goes to access the nested tables in db2.
Fetching records takes much more time even though there are few records in the table. What could be the reason?
Please help me; we are facing this problem daily.
Please avoid duplicate thread:
quaries taking more time
Nicolas.
+< mod. action : thread locked>+ -
Hello Experts
As I am new to SDN, I tried to find an answer to this issue. Please find the issue below.
There is a process chain. In this PC, after the Delete Index step there is a DTP step which updates the cube from one DSO; this first DTP step is much delayed in updating the cube. But the immediately following DTP step to the same cube, from another DSO, finishes much faster.
The problem is with the first DTP step, which updates the cube from the DSO. In this step there is one job, with the naming convention BIDTPR_545787_1, which was running more than 10,000 sec to update the cube. I don't know why this job runs beyond so much time, whereas until some days ago it usually completed within 3,000 secs.
Now the data loading performance of the Prod system is affected where this DTP step is running beyond time. Can someone please suggest how to find and resolve this issue, as I am not sure why jobs like the ones below are running for so long, delaying completion of the PC:
BIDTPR_545787_1
BIDTPR_545689_1 etc. of the first DTP are running longer. Pls help!
Regards
MD
Hi
There could be many reasons, for example:
When that particular DTP is scheduled there may not be enough work processes; this happens when other long-running loads/jobs have occupied them. Hence, while this DTP runs, cross-check in parallel in SM12, SM50 and SM51.
Also check that particular job in SM37 and understand where it is taking more time.
There could be an expensive SELECT statement if any routines are used in the transformations.
Try to find this out.
Regards
Jagadish -
Create Index Step taking long time
I have a Create Index step in a process chain for an InfoCube. The Create Index step takes more time. The requests in this cube are rolled up and compressed. The batch processes are also available. But still the Create Index step takes more time. Any suggestions to reduce the time of the Create Index step?
Hi,
If you have a lot of data in the cube then it will take some time. Check for any dumps in ST22 and SM21, and ask the Basis team about any error messages in DB02.
Else go to RSRV:
Tests in Transaction RSRV --> Database --> Database Indices of an InfoCube and Its Aggregates
Give the cube name and execute it; if any test comes back RED, click on the Repair icon and check again.
Thanks
Reddy -
Query in timesten taking more time than query in oracle database
Hi,
Can anyone please explain why a query in TimesTen is taking more time
than the same query in the Oracle database?
I describe below in detail what my settings are and what I have done, step by step:
1. This is the table I created in the Oracle database
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
CREATE TABLE student (
id NUMBER(9) primary keY ,
first_name VARCHAR2(10),
last_name VARCHAR2(10)
);
2. THIS IS THE ANONYMOUS BLOCK I USE TO
POPULATE THE STUDENT TABLE (TOTAL 2,599,999 ROWS):
declare
firstname varchar2(12);
lastname varchar2(12);
catt number(9);
begin
for cntr in 1..2599999 loop
firstname:=(cntr+8)||'f';
lastname:=(cntr+2)||'l';
if cntr like '%9999' then
dbms_output.put_line(cntr);
end if;
insert into student values(cntr,firstname, lastname);
end loop;
end;
/
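The population logic pairs each id with (id+8)||'f' and (id+2)||'l', which is worth keeping in mind when reading the query results further down. A quick sanity-check of that mapping (plain Java, purely illustrative) against the row returned later for first_name '2155666f':

```java
public class StudentRow {
    // Mirrors the PL/SQL population logic: for a given id (cntr),
    // first_name = (id + 8) || 'f' and last_name = (id + 2) || 'l'.
    static String firstName(int id) { return (id + 8) + "f"; }
    static String lastName(int id)  { return (id + 2) + "l"; }

    // Inverse: recover the id from a first_name like "2155666f".
    static int idFromFirstName(String fn) {
        return Integer.parseInt(fn.substring(0, fn.length() - 1)) - 8;
    }

    public static void main(String[] args) {
        int id = idFromFirstName("2155666f");
        System.out.println(id);            // 2155658
        System.out.println(firstName(id)); // 2155666f
        System.out.println(lastName(id));  // 2155660l
    }
}
```

This matches the ttisql output shown below: < 2155658, 2155666f, 2155660l >.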
3. MY DSN IS SET THE FOLLOWING WAY:
DATA STORE PATH- G:\dipesh3repo\db
LOG DIRECTORY- G:\dipesh3repo\log
PERM DATA SIZE-1000
TEMP DATA SIZE-1000
MY TIMESTEN VERSION-
C:\Documents and Settings\dipesh>ttversion
TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
Instance admin: dipesh
Instance home directory: G:\TimestTen\TT70_32
Daemon home directory: G:\TimestTen\TT70_32\srv\info
THEN I CONNECT TO THE TIMESTEN DATABASE
C:\Documents and Settings\dipesh> ttisql
command>connect "dsn=dipesh3;oraclepwd=tiger";
4. THEN I START THE CACHE AGENT
call ttCacheUidPwdSet('SCOTT','TIGER');
Command> CALL ttCacheStart();
5. THEN I CREATE THE READ-ONLY CACHE GROUP AND LOAD IT
create readonly cache group rc_student autorefresh
interval 5 seconds from student
(id int not null primary key, first_name varchar2(10), last_name varchar2(10));
load cache group rc_student commit every 100 rows;
6. NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY.
I SET THE TIMING:
command>TIMING 1;
consider this query now..
Command> select * from student where first_name='2155666f';
< 2155658, 2155666f, 2155660l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
another query-
Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
2206: Table SCOTT.STUDENTS not found
Execution time (SQLPrepare) = 0.074964 seconds.
The command failed.
Command> SELECT * FROM STUDENT where first_name='2093434f';
< 2093426, 2093434f, 2093428l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
Command>
7. NOW I PERFORM SIMILAR QUERIES FROM SQL*PLUS:
SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
ID FIRST_NAME LAST_NAME
1498663 1498671f 1498665l
Elapsed: 00:00:00.15
Can anyone please explain why the query in TimesTen takes more time
than the query in the Oracle database?
Message was edited by: Dipesh Majumdar
user542575
Message was edited by:
user542575
TimesTen
Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
Version: 7.0.4.0.0 64 bit
Schema:
create usermanaged cache group factCache from
MV_US_DATAMART (
ORDER_DATE DATE,
IF_SYSTEM VARCHAR2(32) NOT NULL,
GROUPING_ID TT_BIGINT,
TIME_DIM_ID TT_INTEGER NOT NULL,
BUSINESS_DIM_ID TT_INTEGER NOT NULL,
ACCOUNT_DIM_ID TT_INTEGER NOT NULL,
ORDERTYPE_DIM_ID TT_INTEGER NOT NULL,
INSTR_DIM_ID TT_INTEGER NOT NULL,
EXECUTION_DIM_ID TT_INTEGER NOT NULL,
EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
NO_ORDERS TT_BIGINT,
FILLED_QUANTITY TT_BIGINT,
CNT_FILLED_QUANTITY TT_BIGINT,
QUANTITY TT_BIGINT,
CNT_QUANTITY TT_BIGINT,
COMMISSION BINARY_FLOAT,
CNT_COMMISSION TT_BIGINT,
FILLS_NUMBER TT_BIGINT,
CNT_FILLS_NUMBER TT_BIGINT,
AGGRESSIVE_FILLS TT_BIGINT,
CNT_AGGRESSIVE_FILLS TT_BIGINT,
NOTIONAL BINARY_FLOAT,
CNT_NOTIONAL TT_BIGINT,
TOTAL_PRICE BINARY_FLOAT,
CNT_TOTAL_PRICE TT_BIGINT,
CANCELLED_ORDERS_COUNT TT_BIGINT,
CNT_CANCELLED_ORDERS_COUNT TT_BIGINT,
ROUTED_ORDERS_NO TT_BIGINT,
CNT_ROUTED_ORDERS_NO TT_BIGINT,
ROUTED_LIQUIDITY_QTY TT_BIGINT,
CNT_ROUTED_LIQUIDITY_QTY TT_BIGINT,
REMOVED_LIQUIDITY_QTY TT_BIGINT,
CNT_REMOVED_LIQUIDITY_QTY TT_BIGINT,
ADDED_LIQUIDITY_QTY TT_BIGINT,
CNT_ADDED_LIQUIDITY_QTY TT_BIGINT,
AGENT_CHARGES BINARY_FLOAT,
CNT_AGENT_CHARGES TT_BIGINT,
CLEARING_CHARGES BINARY_FLOAT,
CNT_CLEARING_CHARGES TT_BIGINT,
EXECUTION_CHARGES BINARY_FLOAT,
CNT_EXECUTION_CHARGES TT_BIGINT,
TRANSACTION_CHARGES BINARY_FLOAT,
CNT_TRANSACTION_CHARGES TT_BIGINT,
ORDER_MANAGEMENT BINARY_FLOAT,
CNT_ORDER_MANAGEMENT TT_BIGINT,
SETTLEMENT_CHARGES BINARY_FLOAT,
CNT_SETTLEMENT_CHARGES TT_BIGINT,
RECOVERED_AGENT BINARY_FLOAT,
CNT_RECOVERED_AGENT TT_BIGINT,
RECOVERED_CLEARING BINARY_FLOAT,
CNT_RECOVERED_CLEARING TT_BIGINT,
RECOVERED_EXECUTION BINARY_FLOAT,
CNT_RECOVERED_EXECUTION TT_BIGINT,
RECOVERED_TRANSACTION BINARY_FLOAT,
CNT_RECOVERED_TRANSACTION TT_BIGINT,
RECOVERED_ORD_MGT BINARY_FLOAT,
CNT_RECOVERED_ORD_MGT TT_BIGINT,
RECOVERED_SETTLEMENT BINARY_FLOAT,
CNT_RECOVERED_SETTLEMENT TT_BIGINT,
CLIENT_AGENT BINARY_FLOAT,
CNT_CLIENT_AGENT TT_BIGINT,
CLIENT_ORDER_MGT BINARY_FLOAT,
CNT_CLIENT_ORDER_MGT TT_BIGINT,
CLIENT_EXEC BINARY_FLOAT,
CNT_CLIENT_EXEC TT_BIGINT,
CLIENT_TRANS BINARY_FLOAT,
CNT_CLIENT_TRANS TT_BIGINT,
CLIENT_CLEARING BINARY_FLOAT,
CNT_CLIENT_CLEARING TT_BIGINT,
CLIENT_SETTLE BINARY_FLOAT,
CNT_CLIENT_SETTLE TT_BIGINT,
CHARGEABLE_TAXES BINARY_FLOAT,
CNT_CHARGEABLE_TAXES TT_BIGINT,
VENDOR_CHARGE BINARY_FLOAT,
CNT_VENDOR_CHARGE TT_BIGINT,
ROUTING_CHARGES BINARY_FLOAT,
CNT_ROUTING_CHARGES TT_BIGINT,
RECOVERED_ROUTING BINARY_FLOAT,
CNT_RECOVERED_ROUTING TT_BIGINT,
CLIENT_ROUTING BINARY_FLOAT,
CNT_CLIENT_ROUTING TT_BIGINT,
TICKET_CHARGES BINARY_FLOAT,
CNT_TICKET_CHARGES TT_BIGINT,
RECOVERED_TICKET_CHARGES BINARY_FLOAT,
CNT_RECOVERED_TICKET_CHARGES TT_BIGINT,
PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
READONLY);
No of rows: 2228558
Config:
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
< ConnectionCharacterSet, US7ASCII >
< ConnectionName, tt_us_dma >
< Connections, 64 >
< DataBaseCharacterSet, AL32UTF8 >
< DataStore, e:\andrew\datacache\usDMA >
< DurableCommits, 0 >
< GroupRestrict, <NULL> >
< LockLevel, 0 >
< LockWait, 10 >
< LogBuffSize, 65536 >
< LogDir, e:\andrew\datacache\ >
< LogFileSize, 64 >
< LogFlushMethod, 1 >
< LogPurge, 0 >
< Logging, 1 >
< MemoryLock, 0 >
< NLS_LENGTH_SEMANTICS, BYTE >
< NLS_NCHAR_CONV_EXCP, 0 >
< NLS_SORT, BINARY >
< OracleID, NYCATP1 >
< PassThrough, 0 >
< PermSize, 4000 >
< PermWarnThreshold, 90 >
< PrivateCommands, 0 >
< Preallocate, 0 >
< QueryThreshold, 0 >
< RACCallback, 0 >
< SQLQueryTimeout, 0 >
< TempSize, 514 >
< TempWarnThreshold, 90 >
< Temporary, 1 >
< TransparentLoad, 0 >
< TypeMode, 0 >
< UID, OS_OWNER >
ORACLE:
Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
Schema:
CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
TABLESPACE TS_OS
PARTITION BY RANGE (ORDER_DATE)
PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
LOGGING
NOCOMPRESS
TABLESPACE TS_OS
NOCACHE
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
USING INDEX
TABLESPACE TS_OS_INDEX
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT order_date, if_system,
GROUPING_ID (order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id
) GROUPING_ID,
/* ============ DIMENSIONS ============ */
time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
instr_dim_id, execution_dim_id, exec_exchange_dim_id,
/* ============ MEASURES ============ */
-- o.FX_RATE /* FX_RATE */,
COUNT (*) no_orders,
-- SUM(NO_ORDERS) NO_ORDERS,
-- COUNT(NO_ORDERS) CNT_NO_ORDERS,
SUM (filled_quantity) filled_quantity,
COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
COUNT (quantity) cnt_quantity, SUM (commission) commission,
COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
COUNT (fills_number) cnt_fills_number,
SUM (aggressive_fills) aggressive_fills,
COUNT (aggressive_fills) cnt_aggressive_fills,
SUM (fx_rate * filled_quantity * average_price) notional,
COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
SUM (fx_rate * fills_number * average_price) total_price,
COUNT (fx_rate * fills_number * average_price) cnt_total_price,
SUM (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END) cancelled_orders_count,
COUNT (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END
) cnt_cancelled_orders_count,
-- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
-- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
-- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
SUM (routed_orders_no) routed_orders_no,
COUNT (routed_orders_no) cnt_routed_orders_no,
SUM (routed_liquidity_qty) routed_liquidity_qty,
COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
SUM (removed_liquidity_qty) removed_liquidity_qty,
COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
SUM (added_liquidity_qty) added_liquidity_qty,
COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
SUM (agent_charges) agent_charges,
COUNT (agent_charges) cnt_agent_charges,
SUM (clearing_charges) clearing_charges,
COUNT (clearing_charges) cnt_clearing_charges,
SUM (execution_charges) execution_charges,
COUNT (execution_charges) cnt_execution_charges,
SUM (transaction_charges) transaction_charges,
COUNT (transaction_charges) cnt_transaction_charges,
SUM (order_management) order_management,
COUNT (order_management) cnt_order_management,
SUM (settlement_charges) settlement_charges,
COUNT (settlement_charges) cnt_settlement_charges,
SUM (recovered_agent) recovered_agent,
COUNT (recovered_agent) cnt_recovered_agent,
SUM (recovered_clearing) recovered_clearing,
COUNT (recovered_clearing) cnt_recovered_clearing,
SUM (recovered_execution) recovered_execution,
COUNT (recovered_execution) cnt_recovered_execution,
SUM (recovered_transaction) recovered_transaction,
COUNT (recovered_transaction) cnt_recovered_transaction,
SUM (recovered_ord_mgt) recovered_ord_mgt,
COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
SUM (recovered_settlement) recovered_settlement,
COUNT (recovered_settlement) cnt_recovered_settlement,
SUM (client_agent) client_agent,
COUNT (client_agent) cnt_client_agent,
SUM (client_order_mgt) client_order_mgt,
COUNT (client_order_mgt) cnt_client_order_mgt,
SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
SUM (client_trans) client_trans,
COUNT (client_trans) cnt_client_trans,
SUM (client_clearing) client_clearing,
COUNT (client_clearing) cnt_client_clearing,
SUM (client_settle) client_settle,
COUNT (client_settle) cnt_client_settle,
SUM (chargeable_taxes) chargeable_taxes,
COUNT (chargeable_taxes) cnt_chargeable_taxes,
SUM (vendor_charge) vendor_charge,
COUNT (vendor_charge) cnt_vendor_charge,
SUM (routing_charges) routing_charges,
COUNT (routing_charges) cnt_routing_charges,
SUM (recovered_routing) recovered_routing,
COUNT (recovered_routing) cnt_recovered_routing,
SUM (client_routing) client_routing,
COUNT (client_routing) cnt_client_routing,
SUM (ticket_charges) ticket_charges,
COUNT (ticket_charges) cnt_ticket_charges,
SUM (recovered_ticket_charges) recovered_ticket_charges,
COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
FROM us_datamart_raw
GROUP BY order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id;
-- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
-- by Oracle with the associated materialized view.
CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
NOLOGGING
NOPARALLEL
COMPRESS 7;
No of rows: 2228558
The query (taken Mondrian) I run against each of them is:
select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
--, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
--, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
--, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
--, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
--, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
--, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
--, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
--, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
--, sum("MV_US_DATAMART"."COMMISSION") as "m9"
--, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
--, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
--,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
--,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
--, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
--, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
--, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
--, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
--,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
--, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
where I uncomment one column at a time and rerun. I have improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
No Columns ORACLE TimesTen
1 1.05 0.94
2 1.07 1.47
3 2.04 1.8
4 2.06 2.08
5 2.09 2.4
6 3.01 2.67
7 4.02 3.06
8 4.03 3.37
9 4.04 3.62
10 4.06 4.02
11 4.08 4.31
12 4.09 4.61
13 5.01 4.76
14 5.02 5.06
15 5.04 5.25
16 5.05 5.48
17 5.08 5.84
18 6 6.21
19 6.02 6.34
20 6.04 6.75 -
Taking More Time while inserting into the table (With foreign key)
Hi All,
I am facing a problem while inserting values into the master table.
The problem:
Table A -- User Master Table (Reg No, Name, etc)
Table B -- Transaction Table (Foreign key reference with Table A).
While inserting data into Table B, I also need to insert the reg no into Table B, which is mandatory. I followed the logic mentioned in the SRDemo.
While inserting, we need to query Table A first to have the values in TableABean.java.
final TableA tableA= (TableA )uow.executeQuery("findUser",TableA .class, regNo);
Then, we need to create the instance for TableB
TableB tableB= (TableB)uow.newInstance(TableB.class);
tableB.setID(bean.getID);
tableA.addTableB(tableB); // this inserts the regNo of TableA into TableB. This line executes the query "select * from TableB where RegNo = <tableA.getRegNo>".
This query takes too much time if there are many rows in TableB for that particular registration no. Because of this, the insert into TableB takes more time.
For example: TableA regNo 101 has few entries in TableB, so inserting a record takes less than 1 sec;
regNo 102 has more entries in TableB, so inserting a record takes more than 2 sec.
The time delay differs between users when they enter transactions in TableB.
I need to avoid this, since in future it will take even more time, from 2 sec to 10 sec, if the volume of data increases.
Please help me to resolve this issue; I am facing it now in production.
Thanks & Regards
VB
Hello,
Looks like you have a 1:M relationship from TableA to TableB, with a 1:1 back pointer from TableB to TableA. If triggering the 1:M relationship is causing you delays that you want to avoid there might be two quick ways I can see:
1) Don't map it. Leave the TableA->TableB 1:M unmapped, and instead just query for the relationship when you do need it. This means you do not need to call tableA.addTableB(tableB), and instead only need to call tableB.setTableA(tableA), so that the TableB->TableA relation gets set. Might not be the best option, but it depends on your application's usage. It does allow you to potentially page the TableB results or add other query performance options when you do need the data though.
2) You are currently using lazy loading for the TableA->TableB relationship - if it is untriggered, don't bother calling tableA.addTableB(tableB), and instead only call tableB.setTableA(tableA). This of course requires using the TopLink API to a) verify the collection is an IndirectCollection type, and b) verify that it hasn't been triggered. If it has been triggered, you will still need to call tableA.addTableB(tableB), but it won't result in a query. Check out the oracle.toplink.indirection.IndirectContainer class and its isInstantiated() method. This can cause problems in highly concurrent environments though, as other threads may have triggered the indirection before you commit your transaction, so that the A->B collection is not up to date - this might require refreshing the TableA if so.
Change tracking would probably be the best option to use here, and is described in the EclipseLink wiki:
http://wiki.eclipse.org/Introduction_to_EclipseLink_Transactions_(ELUG)#Attribute_Change_Tracking_Policy
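Option (1) boils down to maintaining only the child-to-parent pointer and leaving the parent's collection untouched. A minimal plain-Java sketch of that shape (the TableA/TableB classes here are illustrative stand-ins, not the poster's mapped entities, and no TopLink is involved):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-ins for the poster's entities; in TopLink these
// would be mapped persistent classes.
class TableA {
    String regNo;
    // The 1:M side; in option (1) it is left unmapped/untouched.
    List<TableB> tableBs = new ArrayList<>();
    TableA(String regNo) { this.regNo = regNo; }
}

class TableB {
    String id;
    TableA tableA; // 1:1 back pointer
    TableB(String id) { this.id = id; }
    void setTableA(TableA a) { this.tableA = a; }
}

public class BackPointerOnly {
    public static void main(String[] args) {
        TableA a = new TableA("101");
        TableB b = new TableB("B-1");
        // Set only the back pointer; do NOT call a.tableBs.add(b),
        // so the (potentially huge) collection on TableA is never
        // read or traversed when inserting a new TableB row.
        b.setTableA(a);
        System.out.println(b.tableA.regNo);   // 101
        System.out.println(a.tableBs.size()); // 0 - collection untouched
    }
}
```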
Best Regards,
Chris -
Query is taking more time to execute in PROD
Hi All,
Can anyone tell me why this query is taking more time? When I use it for a single trx_number record it works fine, but when I try to use all the records it does not fetch any records and keeps on running.
SELECT DISTINCT OOH.HEADER_ID
,OOH.ORG_ID
,ct.CUSTOMER_TRX_ID
,ool.ship_from_org_id
,ct.trx_number IDP_SHIPMENT_ID
,ctt.type STATUS_CODE
,SYSDATE STATUS_DATE
,ooh.attribute16 IDP_ORDER_NBR --Change based on testing on 21-JUL-2010 in UAT
,lpad(rac_bill.account_number,6,0) IDP_BILL_TO_CUSTOMER_NBR
,rac_bill.orig_system_reference
,rac_ship_party.party_name SHIP_TO_NAME
,raa_ship_loc.address1 SHIP_TO_ADDR1
,raa_ship_loc.address2 SHIP_TO_ADDR2
,raa_ship_loc.address3 SHIP_TO_ADDR3
,raa_ship_loc.address4 SHIP_TO_ADDR4
,raa_ship_loc.city SHIP_TO_CITY
,NVL(raa_ship_loc.state,raa_ship_loc.province) SHIP_TO_STATE
,raa_ship_loc.country SHIP_TO_COUNTRY_NAME
,raa_ship_loc.postal_code SHIP_TO_ZIP
,ooh.CUST_PO_NUMBER CUSTOMER_ORDER_NBR
,ooh.creation_date CUSTOMER_ORDER_DATE
,ool.actual_shipment_date DATE_SHIPPED
,DECODE(mp.organization_code,'CHP', 'CHESAPEAKE'
,'CSB', 'CHESAPEAKE'
,'DEP', 'CHESAPEAKE'
,'CHESAPEAKE') SHIPPED_FROM_LOCATION --'MEMPHIS' --'HOUSTON'
,ooh.freight_carrier_code FREIGHT_CARRIER
,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('FREIGHT',ct.customer_trx_id,ct.org_id),0)
+ NVL(XX_FSG_NA_FASTRAQ_IFACE.get_line_fr_amt ('FREIGHT',ct.customer_trx_id,ct.org_id),0)FREIGHT_CHARGE
,ooh.freight_terms_code FREIGHT_TERMS
,'' IDP_BILL_OF_LADING
,(SELECT WAYBILL
FROM WSH_DELIVERY_DETAILS_OE_V
WHERE -1=-1
AND SOURCE_HEADER_ID = ooh.header_id
AND SOURCE_LINE_ID = ool.line_id
AND ROWNUM =1) WAYBILL_CARRIER
,'' CONTAINERS
,ct.trx_number INVOICE_NBR
,ct.trx_date INVOICE_DATE
,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('LINE',ct.customer_trx_id,ct.org_id),0) +
NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('TAX',ct.customer_trx_id,ct.org_id),0) +
NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('FREIGHT',ct.customer_trx_id,ct.org_id),0)INVOICE_AMOUNT
,NULL IDP_TAX_IDENTIFICATION_NBR
,NVL(XX_FSG_NA_FASTRAQ_IFACE.get_invoice_amount ('TAX',ct.customer_trx_id,ct.org_id),0) TAX_AMOUNT_1
,NULL TAX_DESC_1
,NULL TAX_AMOUNT_2
,NULL TAX_DESC_2
,rt.name PAYMENT_TERMS
,NULL RELATED_INVOICE_NBR
,'Y' INVOICE_PRINT_FLAG
FROM ra_customer_trx_all ct
,ra_cust_trx_types_all ctt
,hz_cust_accounts rac_ship
,hz_cust_accounts rac_bill
,hz_parties rac_ship_party
,hz_locations raa_ship_loc
,hz_party_sites raa_ship_ps
,hz_cust_acct_sites_all raa_ship
,hz_cust_site_uses_all su_ship
,ra_customer_trx_lines_all rctl
,oe_order_lines_all ool
,oe_order_headers_all ooh
,mtl_parameters mp
,ra_terms rt
,OE_ORDER_SOURCES oos
,XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V
WHERE ct.cust_trx_type_id = ctt.cust_trx_type_id
AND ctt.TYPE <> 'BR'
AND ct.org_id = ctt.org_id
AND ct.ship_to_customer_id = rac_ship.cust_account_id
AND ct.bill_to_customer_id = rac_bill.cust_account_id
AND rac_ship.party_id = rac_ship_party.party_id
AND su_ship.cust_acct_site_id = raa_ship.cust_acct_site_id
AND raa_ship.party_site_id = raa_ship_ps.party_site_id
AND raa_ship_loc.location_id = raa_ship_ps.location_id
AND ct.ship_to_site_use_id = su_ship.site_use_id
AND su_ship.org_id = ct.org_id
AND raa_ship.org_id = ct.org_id
AND ct.customer_trx_id = rctl.customer_trx_id
AND ct.org_id = rctl.org_id
AND rctl.interface_line_attribute6 = to_char(ool.line_id)
AND rctl.org_id = ool.org_id
AND ool.header_id = ooh.header_id
AND ool.org_id = ooh.org_id
AND mp.organization_id = ool.ship_from_org_id
AND ooh.payment_term_id = rt.term_id
AND xla_ael_sl_v.last_update_date >= NVL(p_last_update_date,xla_ael_sl_v.last_update_date)
AND ooh.order_source_id = oos.order_source_id --Change based on testing on 19-May-2010
AND oos.name = 'FASTRAQ' --Change based on testing on 19-May-2010
AND ooh.org_id = g_org_id --Change based on testing on 19-May-2010
AND ool.flow_status_code = 'CLOSED'
AND xla_ael_sl_v.trx_hdr_id = ct.customer_trx_id
AND trx_hdr_table = 'CT'
AND xla_ael_sl_v.gl_transfer_status = 'Y'
AND xla_ael_sl_v.accounted_dr IS NOT NULL
AND xla_ael_sl_v.org_id = ct.org_id;
-- AND ct.trx_number = '2000080';
}
Hello Friend,
Your query will definitely take more time, or even fail, in PROD because of the way it is written. Here are a few observations that may help:
1. XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V : Never use a view inside such a long query, because a view is just a window onto the records,
and when it is joined to other tables, all the tables used to create the view also become part of the join condition.
First of all, please check whether you really need this view. I guess you are using it to check whether the records have been created as journal entries or not?
Please check the possibility of finding that through other AR tables.
2. Remove the _ALL tables and instead use the corresponding org-specific views (if you are on 11i) or the synonyms (in R12).
For example: for ra_cust_trx_types_all use ra_cust_trx_types.
This ensures the query executes only for those ORG_IDs which are assigned to that responsibility.
3. Check with the DBA whether GATHER SCHEMA STATS has been run at least for the ONT and RA tables.
You can also check the same using
SELECT LAST_ANALYZED FROM ALL_TABLES WHERE TABLE_NAME = 'RA_CUSTOMER_TRX_ALL';
If the tables are not analyzed, the CBO will not be able to tune your query.
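As a quick sketch of point 3, stats freshness for all the big tables in the query can be checked in one pass; the table list below is just taken from the FROM clause above:

```sql
-- Check when optimizer statistics were last gathered for the main
-- AR/ONT tables used in the query (names from the FROM clause above).
SELECT table_name, num_rows, last_analyzed
  FROM all_tables
 WHERE table_name IN ('RA_CUSTOMER_TRX_ALL',
                      'RA_CUSTOMER_TRX_LINES_ALL',
                      'OE_ORDER_HEADERS_ALL',
                      'OE_ORDER_LINES_ALL')
 ORDER BY table_name;
```

A stale or NULL LAST_ANALYZED here usually means GATHER SCHEMA STATS has not covered that schema recently.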
4. Try to remove the DISTINCT keyword. This is the MAJOR reason for this problem.
5. If it is a report, try to separate the logic into separate queries (using a procedure), populate the whole data set into a custom table, and use this custom table for generating the report.
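A minimal sketch of point 5 — staging the heavy part of the report once, then letting the report read the small custom table. The table name, procedure name, and column list here are all invented for illustration:

```sql
-- Hypothetical staging table; in practice it would carry all report columns.
CREATE TABLE xx_invoice_stg (
  invoice_nbr   VARCHAR2(30),
  invoice_date  DATE,
  invoice_amt   NUMBER
);

CREATE OR REPLACE PROCEDURE xx_load_invoice_stg IS
BEGIN
  -- Rebuild the staging data; split the big join into simpler steps here.
  EXECUTE IMMEDIATE 'TRUNCATE TABLE xx_invoice_stg';

  INSERT /*+ APPEND */ INTO xx_invoice_stg (invoice_nbr, invoice_date, invoice_amt)
  SELECT ct.trx_number, ct.trx_date, 0      -- add the remaining joins step by step
    FROM ra_customer_trx ct;                -- org-specific view, not the _ALL table

  COMMIT;
END;
/
```

The report then queries xx_invoice_stg instead of running the 15-table join on every execution.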
Thanks,
Neeraj Shrivastava
[email protected]
-
Essbase Calculation Script taking more time in new environment
Hi Everyone:
We have four environments in our implementation.
1. DEV Environment - 64 bit Essbase Version 11.1.1.3
2. PreProd Environment - 32 bit Essbase Version 9.3.0
3. PreProd Environment - 64 bit Essbase Version 11.1.1.3
In the above mentioned environment PreProd Environment - 64 bit Essbase Version 11.1.1.3 is a newly installed
environment.
We have migrated our Application from PreProd Environment - 32 bit Essbase Version 9.3.0 to PreProd Environment - 64 bit Essbase Version 11.1.1.3. A calculation script that takes only 20 minutes in 32 bit PreProd is taking more than
5 and half hours in newly installed 64 bit PreProd.
We have also migrated our Application from DEV Environment - 64 bit Essbase Version 11.1.1.3 to PreProd Environment - 64 bit Essbase Version 11.1.1.3. The calculation script that takes only 20 minutes in 64 bit Dev is taking more than 5 and half hours in newly installed 64 bit PreProd.
All the server settings and cache setting everything looks similar in all the three environments.
Please advise us on all the possible causes of this issue.
Thanks and Regards,
Prabhakar.
Hi Cameron,
Thanks for your reply.
I have cross-checked the virtual memory on both servers; on the new server it is set higher.
Please find below the .cfg settings we are using in our application:
AGENTPORT 1423
SERVERPORTBEGIN 32768
SERVERPORTEND 33768
AGENTDESC hypservice_1
;CSSREFRESHLEVEL auto
;SHAREDSERVICESREFRESHINTERVAL 30
CALCCACHEHIGH 199999999
CALCCACHEDEFAULT 150000000
CALCCACHELOW 10000000
CALCLOCKBLOCKDEFAULT 3000
DATAERRORLIMIT 10000
UPDATECALC FALSE
EXCEPTIONLOGOVERWRITE FALSE
CALCREUSEDYNCALCBLOCKS FALSE
PORTUSAGELOGINTERVAL 15
QRYGOVEXECTIME 600
LOGMESSAGELEVEL INFO
CALCPARALLEL 6
MAXLOGINS 100000
AGENTDELAY 100
AGENTTHREADS 30
AGTSVRCONNECTIONS 10
SERVERTHREADS 25
EXPORTTHREADS 1
SSLOGUNKNOWN FALSE
CALCNOTICEDEFAULT 10
NETRETRYCOUNT 3000
NETDELAY 2000
__SM__BUFFERED_IO TRUE
__SM__WAITED_IO TRUE
Also find below the caches that we have defined:
Index cache:250000
Data Cache:250000
Data file cache:32768
All the above settings are identical on both servers.
On the new server, only one script is taking more time; the remaining scripts run fine in less time.
We also ran a test case splitting the script into multiple scripts. In that case, the script where we directly assign a value from one member (say A1) to another member (say A2) is the one taking more time. The same script executes fine on the old server.
We are still not able to find the exact root cause of this issue.
Could anyone please help me resolve it?
Regards,
Prabhakar. -
Snapshot Refresh taking More Time
Dear All,
We are facing a Snapshot refresh problem currently in Production Environment.
Oracle Version : Oracle8i Enterprise Edition Release 8.1.6.1.0
Currently we have created a Snapshot on a Join with 2 remote tables using Synonyms.
ex:
CREATE SNAPSHOT XYZ REFRESH COMPLETE WITH ROWID
AS
SELECT a.* FROM SYN1 a, SYN2 b
Where b.ACCT_NO=a.ACCT_NO;
We have created a Index on the above Snapshot XYZ.
Create index XYZ_IDX1 on XYZ (ACCT_NO);
a. The explain plan of the above query shows an index scan on SYN1.
If we run the above SELECT statement directly, it takes hardly 2 seconds to execute.
b. But the complete refresh of snapshot XYZ takes almost 20 minutes just to truncate and insert 500 records, and it generates huge disk reads, as the remote table behind SYN1 contains 32 million records whereas SYN2 contains only 500 records.
If we truncate and insert into a table ourselves, as the complete refresh does, it takes hardly 4 seconds to refresh the table.
Please let me know the possible reasons why the complete refresh of the snapshot takes so much more time.
Dear All,
While refreshing the Snapshot XYZ,I could find the following.
a. Sort/Merge operation was performed while inserting the data into Snapshot.
INSERT /*+ APPEND */ INTO "XYZ"
SELECT a.* FROM SYN1 a, SYN2 b Where b.ACCT_NO=a.ACCT_NO;
The above operation performed huge disk reads.
b. By changing the session parameter sort_area_size, the time decreased by 50%, but the disk reads are still huge.
I would like to know why a Sort/Merge operation is performed for the above INSERT.
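For reference, the session-level change described in (b) can be sketched as follows; the sort area value is purely illustrative (8i-era per-session tuning):

```sql
-- Raise the per-session sort area before running the refresh, so the
-- sort/merge join spills to disk less often (illustrative 50 MB value).
ALTER SESSION SET sort_area_size = 52428800;

-- Then trigger the complete refresh of the snapshot.
BEGIN
  DBMS_SNAPSHOT.REFRESH('XYZ', 'C');  -- 'C' = complete refresh
END;
/
```

This only mitigates the symptom; the sort/merge itself comes from how the join across the database link is executed.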
-
Oracle 11g: Oracle insert/update operation is taking more time.
Hello All,
In Oracle 11g (Windows 2008 32 bit environment) we are facing following issue.
1) We are inserting/updating data in some tables (4-5 tables) and firing queries at a very high rate.
2) After some time (say 15 days under the same load) we observe that the Oracle insert/update operations are taking more time.
Query 1: How can we find out whether Oracle is actually taking more time for insert/update operations?
Query 2: How do we rectify the problem?
We have a multithreaded environment.
Thanks
With Regards
Hemant.
Liron Amitzi wrote:
Hi Nicolas,
Just a short explanation:
Suppose you have a table with one numeric column, the table is empty, and you have an index on the column.
When you insert a row, the value of the column is inserted into the index. Inserting 1 value into an index holding 10 values is fast; inserting 1 value into an index holding 1 million values takes longer.
My second example: take the same table, insert 10 rows, then delete the previous 10. I always have 10 rows in the table, so the index should stay small. But this is not correct. If I insert values 1-10, then delete 1-10 and insert 11-20, then delete 11-20 and insert 21-30, and so on, then because the index is sorted, the slots where 1-10 were stored become empty spots. Oracle will not fill them up, so the index becomes larger and larger as I insert more rows (even though I delete the old ones).
The solution here is simply to rebuild the index once in a while.
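The periodic rebuild suggested here would look something like the sketch below; the index name is a placeholder, and ONLINE (Enterprise Edition) avoids blocking DML during the rebuild:

```sql
-- Rebuild the index to reclaim deleted leaf space; ONLINE keeps the
-- table available for DML while the rebuild runs (EE feature).
ALTER INDEX my_col_idx REBUILD ONLINE;
```

Note the reply that follows disputes whether such routine rebuilds are actually needed.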
Hope it is clear.
Liron Amitzi
Senior DBA consultant
[www.dbsnaps.com]
[www.orbiumsoftware.com]
Hmmm, index space not reused? Index rebuild once in a while? That is what I understood from your previous post, but nothing is less sure.
This is a misconception of how indexes are working.
I would suggest reading the following interesting doc; it has a lot of nice examples (including index space reuse), and in conclusion:
http://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth.pdf
"Index Rebuild Summary
• The vast majority of indexes do not require rebuilding
• 'Oracle B-tree indexes can become unbalanced and need to be rebuilt' is a myth
• 'Deleted space in an index is deadwood and over time requires the index to be rebuilt' is a myth
• 'If an index reaches x number of levels, it becomes inefficient and requires the index to be rebuilt' is a myth
• 'If an index has a poor clustering factor, the index needs to be rebuilt' is a myth
• 'To improve performance, indexes need to be regularly rebuilt' is a myth"
Good reading,
Nicolas. -
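Before deciding either way, the deleted-space claim can be measured rather than assumed. A hedged sketch (the index name is illustrative; note ANALYZE ... VALIDATE STRUCTURE locks the table while it runs, and INDEX_STATS holds only the last analyzed index for the current session):

```sql
-- Populate INDEX_STATS for one index.
ANALYZE INDEX my_col_idx VALIDATE STRUCTURE;

-- See what fraction of the leaf entries are deleted.
SELECT name,
       lf_rows,
       del_lf_rows,
       ROUND(100 * del_lf_rows / NULLIF(lf_rows, 0), 1) AS pct_deleted
  FROM index_stats;
```

As the Richard Foote paper argues, even a high deleted percentage is often transient, since the space is reused by subsequent inserts.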
Taking more time in INDEX RANGE SCAN compare to the full table scan
Hi all ,
Below are the version og my database.
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for HPUX: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
I have gathered the table statistics, and the plan changed for this SQL statement.
SELECT P1.COMPANY, P1.PAYGROUP, P1.PAY_END_DT, P1.PAYCHECK_OPTION,
P1.OFF_CYCLE, P1.PAGE_NUM, P1.LINE_NUM, P1.SEPCHK FROM PS_PAY_CHECK P1
WHERE P1.FORM_ID = :1 AND P1.PAYCHECK_NBR = :2 AND
P1.CHECK_DT = :3 AND P1.PAYCHECK_OPTION <> 'R'
Plan before the gather stats.
Plan hash value: 3872726522
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 14306 (100)| |
| 1 | TABLE ACCESS FULL| PS_PAY_CHECK | 1 | 51 | 14306 (4)| 00:02:52 |
Plan after the gather stats:
Operation Object Name Rows Bytes Cost
SELECT STATEMENT Optimizer Mode=CHOOSE
1 4
TABLE ACCESS BY INDEX ROWID SYSADM.PS_PAY_CHECK 1 51 4
INDEX RANGE SCAN SYSADM.PS0PAY_CHECK 1 3
After the gather stats the plan looks good, but when I execute the query it takes 5 hours; before the gather stats it finished within 2 hours. I do not want to restore my old statistics. Below are the data for the tables. While observing the run, I see a lot of db file scattered reads.
NAME TYPE VALUE
_optimizer_cost_based_transformation string OFF
filesystemio_options string asynch
object_cache_optimal_size integer 102400
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string choose
optimizer_secure_view_merging boolean TRUE
plsql_optimize_level integer 2
SQL> select count(*) from sysadm.ps_pay_check;
  COUNT(*)
  1270052
SQL> select num_rows, blocks from dba_tables where table_name = 'PS_PAY_CHECK';
  NUM_ROWS     BLOCKS
  1270047      63166
Event Waits Time (s) (ms) Time Wait Class
db file sequential read 1,584,677 6,375 4 36.6 User I/O
db file scattered read 2,366,398 5,689 2 32.7 User I/O
Please let me know why it takes more time with an INDEX RANGE SCAN compared to the full table scan.
suresh.ratnaji wrote:
NAME TYPE VALUE
_optimizer_cost_based_transformation string OFF
filesystemio_options string asynch
object_cache_optimal_size integer 102400
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string choose
optimizer_secure_view_merging boolean TRUE
plsql_optimize_level integer 2
please let me know why it takes more time with an INDEX RANGE SCAN compared to the full table scan?
Suresh,
Any particular reason why you have a non-default value for the hidden parameter _optimizer_cost_based_transformation?
On my 10.2.0.1 database, its default value is "linear". What happens when you reset the hidden parameter to its default?
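Resetting the hidden parameter back to its default could be sketched as below; hidden (underscore) parameters should normally only be changed or reset under Oracle Support guidance:

```sql
-- Hidden parameters must be double-quoted; RESET removes the override
-- from the spfile so the default applies at the next instance restart.
ALTER SYSTEM RESET "_optimizer_cost_based_transformation" SCOPE = SPFILE SID = '*';
```

After the restart, the parameter returns to its release default and the plan can be compared again.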