Truncate table taking too much time
Hi guys,
Thanks in advance.
Oracle version: 9.2.0.5.0
OS version: SunOS Ganesha1 5.9 Generic_122300-05 sun4u sparc SUNW,Sun-Fire-V890
Application: PeopleSoft
Version: 8.4
Everything was running fine until last week.
Now, whenever a process such as billing or d_dairy is executed,
it selects some temporary tables and starts truncating them; each truncate takes 5 to 8 minutes, even when the table has 0 rows.
If more than one user runs a process (even different processes), the sessions end up waiting on locks.
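A first diagnostic step (a sketch only, not from the original post; :truncating_sid is a placeholder for the SID of the hung session, and access to the V$ views is assumed) is to check what the truncating session is waiting on and whether another session holds a conflicting lock:
SELECT sid, event, p1text, p1, seconds_in_wait
  FROM v$session_wait
 WHERE sid = :truncating_sid;

SELECT sid, type, id1, id2, lmode, request
  FROM v$lock
 WHERE sid = :truncating_sid OR request > 0;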
Regards,
Deep
Hi,
Here I am sending the details of the technical settings:
Name: ACCTIT (Transparent Table)
Short text: Compressed Data from FI/CO Document
Last Change: SAP, 10.02.2005
Status: Active, Saved
Data class: APPL1 (Transaction data, transparent tables)
Size category: 4 (Data records expected: 24,000 to 89,000)
Thank you.
Similar Messages
-
ACCTIT table Taking too much time
Hi,
In SE16 on table ACCTIT, I entered the G/L account number and executed the selection in production; it is taking too much time to show the result.
Thank you.
Hi,
Here I am sending the details of the technical settings:
Name: ACCTIT (Transparent Table)
Short text: Compressed Data from FI/CO Document
Last Change: SAP, 10.02.2005
Status: Active, Saved
Data class: APPL1 (Transaction data, transparent tables)
Size category: 4 (Data records expected: 24,000 to 89,000)
Thank you. -
Hi all
I'm quite new to database administration. My problem is that I'm trying to import a dump file, but one of the tables is taking too much time to import.
Description:
1. The export was taken from the source database, which is on Oracle 8i with character set WE8ISO8859P1.
2. I am importing into 10g with character set UTF8; the national character set is the same.
3. The dump file is about 1.5 GB.
4. I got an error like "value too large for column", so in the target DB (which is UTF8) I converted all columns from VARCHAR2 to CHAR.
5. While doing the import, some tables import very fast, but at one particular table it becomes very slow.
Please help me. Thanks in advance.
Hello,
4. I got an error like "value too large for column", so in the target DB (which is UTF8) I converted all columns from VARCHAR2 to CHAR.
5. While doing the import, some tables import very fast, but at one particular table it becomes very slow.
For point *4*, it's typically due to the CHARACTER SET conversion.
You export data in WE8ISO8859P1 and import into UTF8. In WE8ISO8859P1, characters are encoded in *1 byte*, so *1 CHAR = 1 BYTE*. In UTF8 (Unicode), characters are encoded in up to *4 bytes*, so *1 CHAR > 1 BYTE*.
For this reason you'll have to increase the length of your CHAR or VARCHAR2 columns, or add the CHAR option (by default it's BYTE) in the column datatype definitions of the tables. For instance:
VARCHAR2(100 CHAR)
The NLS_LENGTH_SEMANTICS parameter may also be used, but it is not handled very well by export/import.
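To illustrate (a sketch only; the table and column names below are invented, not from this thread), the CHAR semantics can come either from the session setting or from the individual column definitions:
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
CREATE TABLE demo_addr (
  addr_id NUMBER PRIMARY KEY,
  city    VARCHAR2(100),       -- picks up CHAR semantics from the session setting
  street  VARCHAR2(200 CHAR)   -- or spelled out explicitly per column
);
Either way, VARCHAR2(100) then holds 100 characters rather than 100 bytes.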
So, I suggest the following:
1. Set NLS_LENGTH_SEMANTICS=CHAR on your target database and restart the database.
2. Create all your tables (empty) from a script on the target database (without the indexes and constraints).
3. Import the data into the tables.
4. Import the indexes and constraints.
You'll find more information in the following MOS note:
Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]
For point *5*, it may be due to the conversion problem you are experiencing; it may also be due to a special datatype like LONG.
Also, one question: why did you choose UTF8 on your target database and not AL32UTF8?
AL32UTF8 is the recommended character set for Unicode.
Hope this helps.
Best regards,
Jean-Valentin -
Delta Sync taking too much time on refreshing of tables
Hi,
I am working on Smart Service Manager 3.0. I have come across a scenario where the delta sync is taking too much time.
The requirement is that if we update the stock quantity, the stock should be updated instantaneously.
To do this we have to refresh 4 stock tables at every sync so that the updated quantity is reflected on the device.
This is taking a lot of time (3 to 4 minutes), which is unacceptable from the user's perspective.
Could anyone please suggest a way so that only the tables on which an action was actually carried out get refreshed?
For example, the CTStock table should get refreshed only if I transfer stock, and be updated accordingly,
not in any other scenario, such as changing the status from Accept to Driving or anything else unrelated to stock.
Thanks,
Star
Tags edited by: Michael Appleby
Hi fiontan,
Thanks a lot for the response!!!
Yeah, I know it's a lot of code, but I thought it'd be more informative if the whole function was quoted.
I'm in fact using the PrintWriter to wrap the BufferedWriter, but am not using the print() method.
Does using the print() method save any time?
The place where the delay occurs is the while loop shown below:
while (allitems.hasMoreElements()) {
    String aRow = "";
    XDItem item = (XDItem) allitems.nextElement();
    for (int i = 0; i < props.length; i++) {
        String value = item.getStringValue(props[i]);   // props[i]: the i-th property of this item
        if (value == null || value.equalsIgnoreCase("null"))
            value = "";
        if (i == 0)
            aRow = value;
        else
            aRow += ("\t" + value);
    }
    startTime1 = System.currentTimeMillis();
    System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
    bufferWrt.write(aRow.toCharArray());
    out.flush();       // added by rosmon to check extra time taken for extraction
    bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
    bufferWrt.newLine();
    startTime2 = System.currentTimeMillis();
    System.out.println("time here is--after-writing to buffer : " + startTime2);
}
What exactly happens is that after a few loops it just seems to sleep for around 20 seconds, then starts off again, and it goes on like this until all the records are done.
Please let me know if you have any idea why this is happening; this bug is giving me a scare.
Thanks in advance. -
Code taking too much time to output
The following code is taking too much time to execute (sometimes it even fails with a TIME_OUT).
ind = sy-tabix.
SELECT SINGLE * FROM mseg INTO mseg
  WHERE bwart = '102' AND
        lfbnr = itab-mblnr AND
        ebeln = itab-ebeln AND
        ebelp = itab-ebelp.
IF sy-subrc = 0.
  DELETE itab INDEX ind.
  CONTINUE.
ENDIF.
Is there any other way to write this code to reduce the runtime?
Thanks
Hi,
I think you are executing this code in a loop which is causing the problem. The rule is "Never put SELECT statements inside a loop".
Try to rewrite the code as follows:
* Outside the loop
if not itab[] is initial.
  select * from mseg
    into table lt_mseg
    for all entries in itab
    where bwart = '102' and
          lfbnr = itab-mblnr and
          ebeln = itab-ebeln and
          ebelp = itab-ebelp.
endif.
* Then inside the loop, do a READ on the internal table
loop at itab.
  read table lt_mseg with key lfbnr = itab-mblnr
                              ebeln = itab-ebeln
                              ebelp = itab-ebelp
                     transporting no fields.
  if sy-subrc = 0.   "a matching MSEG record exists, so drop the row (same logic as the original code)
    delete itab.     "deletes the current loop line
  endif.
endloop.
I think this should optimise performance. You can check your code's performance using SE30 or ST05.
Hope this helps! Please revert if you need anything else!!
Cheers,
Shailesh.
Always provide feedback for helpful answers! -
Report taking too much time in the portal
Hi friends,
We have developed a report on the ODS, and we have published it on the portal.
The problem is that when users execute the report at the same time, it takes too much time; because of this the performance is very poor.
Is there any way to sort this issue out? For instance, can we send the report to the individual users' mail IDs
so that they do not have to log in to the portal,
or can we create the same report on the cube?
What would be the main difference between building the report on the cube versus on the ODS?
Please help me.
Thanks in advance.
Sridath
Hi,
Try the following to improve the performance of the query.
Find the query run-time:
Where to find the query run-time?
Note 557870 - FAQ: BW Query Performance
Note 130696 - Performance trace in BW
This info may be helpful.
General tips:
Use aggregates and compression.
Use fewer and less complex cell definitions if possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs.
3. Avoid too many characteristics in the rows.
Use T-codes ST03 or ST03N:
Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
/people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
/people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
Try table RSDDSTATS to get the statistics.
Using cache memory will decrease the loading time of the report.
Run the reporting agent at night and send the results by email. This will ensure use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
Also try:
1. Use different parameters in ST03 to see the two important figures: the aggregation ratio and the records transferred to the front end versus the records selected from the DB.
2. Use the program SAP_INFOCUBE_DESIGNS (performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try running RSRV checks on the cube and its aggregates.
Go to SE38 > run the program SAP_INFOCUBE_DESIGNS.
It will show the dimension vs. fact table sizes in percent. If you mean the speed of queries on a cube as the performance metric of the cube, measure the query runtime.
3. To check the performance of the aggregates, see the valuation and usage columns of the aggregates.
Open the aggregates and observe the VALUATION and USAGE columns.
The "---" sign is the valuation of the aggregate; for example, "---" means a valuation of -3 for the aggregate's design and usage. "++" means that its compression is good and it is accessed frequently (in effect, performance is good); if you check its compression ratio, it should be good. "--" means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate is and the more queries it satisfies; the greater the number of minus signs, the worse the evaluation of the aggregate.
If it shows "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
In the valuation column, more positive signs mean the aggregate performs well and is useful to keep; more negative signs mean we are better off not using that aggregate.
In the usage column, we can see how often the aggregate has been used by queries.
Thus we can check the performance of the aggregate.
Refer to:
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
performance ISSUE related to AGGREGATE
Note 356732 - Performance Tuning for Queries with Aggregates
Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This will tell you whether it hit any aggregates while running; if it does not show any aggregates, you might want to redesign your aggregates for the query.
Your query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X, or H); the advisable read mode is X.
5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
Implement the BW Statistics Business Content: you need to install it, feed it data, and then analyse it through the ready-made reports.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-code DB20, which gives you all the performance-related information, like:
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 202469 - Using the aggregate check tool
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
You can find out whether an aggregate is useful or useless by checking the tables RSDDSTATAGGRDEF*.
Run the query in RSRT with statistics, execute it, and come back; you will get a STATUID. Copy this and check it in the table.
This tells you exactly which InfoObjects the query is hitting; if any one of the objects is missing, it is a useless aggregate.
6. Check SE11 > table RSDDAGGRDIR. You can find the last call-up in the table.
Generate Report in RSRT
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Business Intelligence Journal Improving Query Performance in Data Warehouses
http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
Achieving BI Query Performance Building Business Intelligence
http://www.dmreview.com/issues/20051001/1038109-1.html
Assign points if useful
Cheers
SM -
Client import taking too much time
Hi all,
I am importing a client. It has completed copying tables (19,803 of 19,803), but for the last four hours its status has been 'Processing'.
scc3
Target Client 650
Copy Type Client Import Post-Proc
Profile SAP_CUST
Status Processing...
User SAP*
Start on 24.05.2009 / 15:08:03
Last Entry on 24.05.2009 / 15:36:25
Current Action: Post Processing
- Last Exit Program RGBCFL01
Transport Requests
- Client-Specific PRDKT00004
- Texts PRDKX00004
Statistics for this Run
- No. of Tables 19803 of 19803
- Deleted Lines 7
- Copied Lines 0
sm50
1 DIA 542 Running Yes SAPLTHFB 650 SAP*
7 BGD 4172 Running Yes 11479 RGTBGD23 650 SAP* Sequential Read D010INC
sm66
Server No. Type PID Status Reason Sem Start Error CPU Time User Report Action Table
prdsap_PRD_00 7 BTC 4172 Running Yes 11711 SAP* RGTBGD23 Sequential Read D010INC
Please guide me: why is it taking so much time when it has already finished most of the work?
Best regards,
Khan
The import is in post-processing. It digs through all the documents and adapts them to the new client. Most of the tables in the application area have a "MANDT" (= client) field which needs to be changed. Depending on the size of the client this can take a huge amount of time.
You can try to improve the speed by updating the table statistics for table D010INC.
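On an Oracle-based system (an assumption, since the post does not name the database platform; the schema owner SAPR3 below is only an example and depends on the installation), the statistics update could be done with DBMS_STATS:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SAPR3',
    tabname => 'D010INC',
    cascade => TRUE);   -- also refresh the index statistics
END;
/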
Markus -
Importing is taking too much time (2 DAYS)
Dear All,
I'm importing the support packages below together in one queue on SAP Solution Manager 4.
SAPKB70015 Basis Support Package 15 for 7.00
SAPKA70015 ABA Support Package 15 for 7.00
SAPKITL426 ST 400: Patch 0016, CRT for SAPKB70015
SAPKIBIIP6 BI_CONT 703: patch 0006
SAPKIBIIP7 BI_CONT 703: patch 0007
SAPKIBIIP8 BI_CONT 703: patch 0008
SAPK-40010INCPRXRPM CPRXRPM 400: patch 0010
SAPK-40011INCPRXRPM CPRXRPM 400: patch 0011
SAPK-40012INCPRXRPM CPRXRPM 400: patch 0012
SAPKIPYJ7E PI_BASIS 2005_1_700: patch 0014
SAPKW70016 BW Support Package 16 for 7.00
The import is taking too much time (2 days) in the main import phase. I looked at SLOG; there are many rows saying "I am waiting 1 sec" and "6 sec". I also checked transaction STMS: all support packages have been imported except one, "SAPKW70016".
Please advise.
Best regards,
HE
Hello Mohan,
The DBTABLOG table does get large; the best option is to switch off logging. If that's not possible, increase the frequency of your delete job. Also explore one more alternative: have a look at the archiving object BC_DBLOGS - you could archive old records (in accordance with your customer's data retention policies) to reduce the size of the table.
Also, have a look at the following notes, they will advise you on how to improve the performance of your delete job:
Note 531923 - Audit Trail: Indexes on table DBTABLOG
Note 579980 - Table logs: Performance during access to DBTABLOG
Regards,
Siddhesh -
Combining clob taking too much time - need help
Hello,
I want to append CLOB values onto another CLOB value, like this:
declare
  v_temp    clob;
  v_c1_temp clob;
  cursor c1 is select clob_data from t;
  -- here the table t contains a clob_data column holding CLOB data
  -- t has more than 1500 rows
begin
  v_temp := '';
  open c1;
  loop
    fetch c1 into v_c1_temp;
    exit when c1%notfound;
    v_temp := v_temp || v_c1_temp;  -- combining CLOBs (CONCAT may be used)
  end loop;
  close c1;
end;
The part above is also repeated in an outer loop (not shown here).
My problem is that it is taking too much time - believe it or not, about 90% of the time goes into that concatenation.
Is there any smarter way to do the same thing? It would be very helpful to me.
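For illustration only, not from the thread: each "v_temp := v_temp || next_value" re-copies the whole accumulated CLOB, which is what makes the loop so slow; appending into a temporary CLOB with DBMS_LOB.APPEND avoids the re-copying. The table t and column clob_data are the ones used above.
declare
  v_temp clob;
begin
  dbms_lob.createtemporary(v_temp, true);
  for r in (select clob_data from t) loop
    dbms_lob.append(v_temp, r.clob_data);   -- append in place, no full copy of v_temp
  end loop;
  -- ... use v_temp here ...
  dbms_lob.freetemporary(v_temp);
end;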
Hi,
You should also always try to have the latest BI Content patch installed, but I don't think this is the problem. It seems that there are a lot of objects to collect. Under 'Grouping' you can select the option 'Only necessary objects'; please check whether you can use this option to install the objects that you need from Content.
Best Regards,
Des. -
Matview refresh taking too much time.....
Hello All,
I am trying to create a matview using a DB link; the source table contains 30 crore (300 million) rows.
I am picking only 2.5 crore (25 million) rows from the source, and I have used the where condition "WHERE ALLOCATION_DATE BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MM'),-1) AND ADD_MONTHS(LAST_DAY(TRUNC(SYSDATE)),6)".
But it is taking too much time to refresh, even though I am using atomic_refresh => false.
The source table contains the following columns:
ASSIGNMENT#
PROJECT#
ALLOCATION_DATE
EFFORTS
WEEKEND_LEAVE_FLAG
ANU_YEAR
LAST_UPDATE
ALLOCATION_EFFORTS
The source table is partitioned on ANU_YEAR.
Can anyone please tell me how to create a fast-refresh matview?
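For illustration only, not part of the original thread: a sketch of a complete-refresh materialized view over a DB link, using invented names mv_alloc and src_alloc@remote_db. A true FAST refresh would additionally require a materialized view log on the remote source table.
CREATE MATERIALIZED VIEW mv_alloc
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT assignment#, project#, allocation_date, efforts,
       weekend_leave_flag, anu_year, last_update, allocation_efforts
  FROM src_alloc@remote_db
 WHERE allocation_date BETWEEN ADD_MONTHS(TRUNC(SYSDATE,'MM'), -1)
                           AND ADD_MONTHS(LAST_DAY(TRUNC(SYSDATE)), 6);

-- non-atomic complete refresh: truncate plus direct-path insert instead of delete plus insert
BEGIN
  DBMS_MVIEW.REFRESH(list => 'MV_ALLOC', method => 'C', atomic_refresh => FALSE);
END;
/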
953975 wrote:
Can anyone please tell me how to create a fast-refresh matview?
Please read {message:id=9360003} and follow the advice there.
Also, please use international English. Crore is not part of International English - use thousands, millions etc. -
Spatial query with sdo_aggregate_union taking too much time
Hello friends,
The following query is taking too much time to execute.
table1 contains around 2,000 records.
table2 contains 124 rows.
SELECT
  table1.id
, table1.txt
, table1.id2
, table1.acti
, table1.acti
, table1.geom as geom
FROM
  table1
WHERE
  sdo_relate(
    table1.geom,
    (select sdo_aggr_union(sdoaggrtype(geom, 0.0005)) from table2),
    'mask=(ANYINTERACT) querytype=window'
  ) = 'TRUE'
I am new to Spatial. I am trying to find the list of geometries in table1 that fall within the geometries stored in table2.
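For illustration only, not from the thread: the same station-in-city check can also be written as a direct pairwise join, reusing the table and column names from the query above; DISTINCT guards against a station that touches more than one city area being returned twice.
SELECT DISTINCT a.rid, a.txt
  FROM table1 a, table2 b
 WHERE sdo_relate(a.geom, b.geom, 'mask=ANYINTERACT') = 'TRUE';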
Thanks
Hi, thanks a lot for your reply.
But it is not required to use the sdo_aggregate function to find out whether a geometry in one table lies inside another geometry.
Let me give you a clearer picture.
What I am trying to do is this: table1 contains the list of all stations (station information) of a state, and table2 contains the list of city areas. So I want to find out which stations belong to the city.
For this, I thought I would build the aggregated union of the city areas and then check for any interaction of that final aggregation result with the station geometries, to see whether each station is in the city or not.
I hope this helps you understand my query.
Thanks
I appreciate your efforts. -
Archive Delete job taking too much time - STXH Sequential Read
Hello,
We have been running archive sessions in our production system for the last couple of months. We use SARA and select the appropriate variants for the WRITE, DELETE and STORAGE options.
Currently we use the archiving object FI_DOCUMNT, and the write job finishes in the normal time (5 hours, based on the selection criteria). After that the delete job has always completed within 1 to 2 hours (over the last 3 months).
But in the last few days the delete job has been taking far longer to complete (around 8-10 hours). When I monitor the system I find that the sequential read on table STXH is taking too much time, and it seems this is the cause.
Could you please suggest a solution so that the job runs as fast as it did before?
Thanks for your time
Shyl
Hi Juan,
After the statistics run, the performance is quite good. The job now finishes as expected.
Thanks. Problem solved.
Shyl -
Auto Invoice Program taking too much time : problem with update sql
Hi ,
Oracle DB version: 11.2.0.3
Oracle EBS version: 12.1.3
Though we have a SEV-1 SR open with Oracle, we have not had much success.
We have an Auto Invoice program which runs many times a day, and it has been taking too much time since the beginning. While troubleshooting we found one query that takes most of the time, and we are seeking suggestions on how to tune it. I am attaching the explain plan for the same. It is an UPDATE query. Please guide.
Plan
UPDATE STATEMENT ALL_ROWSCost: 0 Bytes: 124 Cardinality: 1
50 UPDATE AR.RA_CUST_TRX_LINE_GL_DIST_ALL
27 FILTER
26 HASH JOIN Cost: 8,937,633 Bytes: 4,261,258,760 Cardinality: 34,364,990
24 VIEW VIEW SYS.VW_NSO_1 Cost: 8,618,413 Bytes: 446,744,870 Cardinality: 34,364,990
23 SORT UNIQUE Cost: 8,618,413 Bytes: 4,042,339,978 Cardinality: 34,364,990
22 UNION-ALL
9 FILTER
8 SORT GROUP BY Cost: 5,643,052 Bytes: 3,164,892,625 Cardinality: 25,319,141
7 HASH JOIN Cost: 1,640,602 Bytes: 32,460,436,875 Cardinality: 259,683,495
1 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993 Bytes: 402,499,500 Cardinality: 20,124,975
6 HASH JOIN Cost: 853,567 Bytes: 22,544,143,440 Cardinality: 214,706,128
4 HASH JOIN Cost: 536,708 Bytes: 2,357,000,550 Cardinality: 29,835,450
2 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 153,008 Bytes: 1,163,582,550 Cardinality: 29,835,450
3 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 307,314 Bytes: 1,193,526,000 Cardinality: 29,838,150
5 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 132,951 Bytes: 3,123,197,116 Cardinality: 120,122,966
21 FILTER
20 SORT GROUP BY Cost: 2,975,360 Bytes: 877,447,353 Cardinality: 9,045,849
19 HASH JOIN Cost: 998,323 Bytes: 17,548,946,769 Cardinality: 180,916,977
13 VIEW VIEW AR.index$_join$_027 Cost: 108,438 Bytes: 867,771,256 Cardinality: 78,888,296
12 HASH JOIN
10 INDEX RANGE SCAN INDEX AR.RA_CUSTOMER_TRX_N15 Cost: 58,206 Bytes: 867,771,256 Cardinality: 78,888,296
11 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 62,322 Bytes: 867,771,256 Cardinality: 78,888,296
18 HASH JOIN Cost: 748,497 Bytes: 3,281,713,302 Cardinality: 38,159,457
14 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993 Bytes: 402,499,500 Cardinality: 20,124,975
17 HASH JOIN Cost: 519,713 Bytes: 1,969,317,900 Cardinality: 29,838,150
15 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 302,822 Bytes: 716,115,600 Cardinality: 29,838,150
16 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 149,847 Bytes: 1,253,202,300 Cardinality: 29,838,150
25 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 157,552 Bytes: 5,158,998,615 Cardinality: 46,477,465
41 SORT GROUP BY Bytes: 75 Cardinality: 1
40 FILTER
39 MERGE JOIN CARTESIAN Cost: 11 Bytes: 75 Cardinality: 1
35 NESTED LOOPS Cost: 8 Bytes: 50 Cardinality: 1
32 NESTED LOOPS Cost: 5 Bytes: 30 Cardinality: 1
29 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 3 Bytes: 22 Cardinality: 1
28 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_LINES_U1 Cost: 2 Cardinality: 1
31 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 2 Bytes: 133,114,520 Cardinality: 16,639,315
30 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
34 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 20 Cardinality: 1
33 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 2 Cardinality: 1
38 BUFFER SORT Cost: 9 Bytes: 25 Cardinality: 1
37 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 25 Cardinality: 1
36 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2 Cardinality: 1
49 SORT GROUP BY Bytes: 48 Cardinality: 1
48 FILTER
47 NESTED LOOPS
45 NESTED LOOPS Cost: 7 Bytes: 48 Cardinality: 1
43 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 4 Bytes: 20 Cardinality: 1
42 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 3 Cardinality: 1
44 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2 Cardinality: 1
46 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 28 Cardinality: 1
Oracle has suggested multiple patches, but that has not been helpful. Please suggest how I can tune this query; I don't have much of a clue about query tuning.
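A generic first step (not something suggested in the thread) is to run the statement through the SQL Tuning Advisor; the sql_id value and the task name below are placeholders.
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id     => '&sql_id_of_update',   -- placeholder: sql_id of the slow UPDATE
              time_limit => 3600,
              task_name  => 'autoinv_upd_tune');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('autoinv_upd_tune') FROM dual;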
Regards
Hi Paul, my bad. I am sorry I missed it.
The query is below:
UPDATE RA_CUST_TRX_LINE_GL_DIST LGD SET (AMOUNT, ACCTD_AMOUNT) = (SELECT /*+ index(rec1 RA_CUST_TRX_LINE_GL_DIST_N6) ordered */ NVL(LGD.AMOUNT, 0) - ( SUM(LGD2.AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * CTL.EXTENDED_AMOUNT ) ), NVL(LGD.ACCTD_AMOUNT, 0) - ( SUM(LGD2.ACCTD_AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX CT, RA_CUST_TRX_LINE_GL_DIST LGD2, RA_CUST_TRX_LINE_GL_DIST REC1 WHERE CTL.CUSTOMER_TRX_LINE_ID = LGD2.CUSTOMER_TRX_LINE_ID AND CTL.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND LGD.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD2.ACCOUNT_SET_FLAG = 'N' AND REC1.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND REC1.ACCOUNT_CLASS = 'REC' AND REC1.LATEST_REC_FLAG = 'Y' AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD2.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, REC1.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, CT.EXCHANGE_RATE ), PERCENT = (SELECT /*+ index(rec2 RA_CUST_TRX_LINE_GL_DIST_N6) */ DECODE(LGD.ACCOUNT_CLASS || LGD.ACCOUNT_SET_FLAG, 'SUSPENSEN', LGD.PERCENT, 'UNBILLN', LGD.PERCENT, 'UNEARNN', LGD.PERCENT, NVL(LGD.PERCENT, 0) - ( SUM(NVL(LGD4.PERCENT, 0)) - DECODE(REC2.GL_DATE, NVL(LGD.GL_DATE, REC2.GL_DATE), 100, 0) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD4, RA_CUST_TRX_LINE_GL_DIST REC2 WHERE LGD.CUSTOMER_TRX_LINE_ID = LGD4.CUSTOMER_TRX_LINE_ID AND REC2.CUSTOMER_TRX_ID = LGD.CUSTOMER_TRX_ID AND REC2.CUSTOMER_TRX_ID = LGD4.CUSTOMER_TRX_ID AND REC2.ACCOUNT_CLASS = 'REC' AND REC2.LATEST_REC_FLAG = 'Y' AND LGD4.ACCOUNT_SET_FLAG = LGD.ACCOUNT_SET_FLAG AND DECODE(LGD4.ACCOUNT_SET_FLAG, 'Y', LGD4.ACCOUNT_CLASS, LGD.ACCOUNT_CLASS) = LGD.ACCOUNT_CLASS AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD4.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY REC2.GL_DATE, LGD.GL_DATE ), LAST_UPDATED_BY = :B1 , LAST_UPDATE_DATE = SYSDATE WHERE CUST_TRX_LINE_GL_DIST_ID IN (SELECT /*+ index(rec3 RA_CUST_TRX_LINE_GL_DIST_N6) */ MIN(DECODE(LGD3.GL_POSTED_DATE, NULL, LGD3.CUST_TRX_LINE_GL_DIST_ID, NULL) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX T, RA_CUST_TRX_LINE_GL_DIST LGD3, RA_CUST_TRX_LINE_GL_DIST REC3 WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = CTL.CUSTOMER_TRX_ID AND (CTL.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE' ) OR (CTL.LINE_TYPE = 'LINE' AND CTL.ACCOUNTING_RULE_ID IS NULL )) AND LGD3.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD3.ACCOUNT_SET_FLAG = 'N' AND REC3.CUSTOMER_TRX_ID = T.CUSTOMER_TRX_ID AND REC3.ACCOUNT_CLASS = 'REC' AND REC3.LATEST_REC_FLAG = 'Y' AND NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) = DECODE(:B4 , 'INV', -1, 'REGULAR_CM', T.PREVIOUS_CUSTOMER_TRX_ID, NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, LGD3.GL_DATE, REC3.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, T.EXCHANGE_RATE HAVING ( SUM(NVL(LGD3.AMOUNT, 0)) <> CTL.EXTENDED_AMOUNT * DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) OR SUM(NVL(LGD3.ACCTD_AMOUNT, 0)) <> DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) UNION SELECT /*+ index(rec5 RA_CUST_TRX_LINE_GL_DIST_N6) INDEX (lgd5 ra_cust_trx_line_gl_dist_n6) index(ctl2 ra_customer_trx_lines_u1) */ TO_NUMBER( MIN(DECODE(LGD5.GL_POSTED_DATE||LGD5.ACCOUNT_CLASS|| LGD5.ACCOUNT_SET_FLAG, 'REVN', 
LGD5.CUST_TRX_LINE_GL_DIST_ID, 'REVY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNEARNY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNBILLY', LGD5.CUST_TRX_LINE_GL_DIST_ID, NULL ) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD5, RA_CUST_TRX_LINE_GL_DIST REC5, RA_CUSTOMER_TRX_LINES CTL2, RA_CUSTOMER_TRX T WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = REC5.CUSTOMER_TRX_ID AND CTL2.CUSTOMER_TRX_LINE_ID = LGD5.CUSTOMER_TRX_LINE_ID AND REC5.CUSTOMER_TRX_ID = LGD5.CUSTOMER_TRX_ID AND REC5.ACCOUNT_CLASS = 'REC' AND REC5.LATEST_REC_FLAG = 'Y' AND (CTL2.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE') OR (CTL2.LINE_TYPE = 'LINE' AND (CTL2.ACCOUNTING_RULE_ID IS NULL OR LGD5.ACCOUNT_SET_FLAG = 'Y' ))) GROUP BY LGD5.CUSTOMER_TRX_LINE_ID, LGD5.GL_DATE, REC5.GL_DATE, LGD5.ACCOUNT_SET_FLAG, DECODE(LGD5.ACCOUNT_SET_FLAG, 'N', NULL, LGD5.ACCOUNT_CLASS) HAVING SUM(NVL(LGD5.PERCENT, 0)) <> DECODE( NVL(LGD5.GL_DATE, REC5.GL_DATE), REC5.GL_DATE, 100, 0) )
I understand that this could be a seeded query but my attempt is to tune it.
Regards -
Delete query taking too much time
Hi All,
My delete query is taking too much time: around 1 hour 30 minutes for 1.5 lakh (150,000) records.
I have already dropped the MV log on the table and disabled all the triggers on it.
Moreover, the deletion is based on the primary key.
delete from table_name where primary_key in (values)
The above is the dummy format of my query.
Can anyone please tell me what other reasons there could be for the query performing that slowly?
Is there anything to check in the DB other than triggers, MV logs, and constraints in order to improve the performance?
Please reply ASAP.
Delete is the most time-consuming operation, as the whole record has to be written to the undo segments. On the other hand, the part of the query used to select the records to delete - the IN (values) clause - is probably adding extra overhead to the process. It would be nice if you could post another dummy of this (values) clause; I would guess it is a subquery, and that in order to obtain this list you have to run an inefficient query.
You can gather the execution plan so you can see where the heaviest part of the query is. That way a better tuning approach and a more accurate diagnosis can be made.
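For example (a sketch; substitute the real DELETE statement - the subquery shown here is only a placeholder for the actual (values) clause):
EXPLAIN PLAN FOR
  delete from table_name where primary_key in (select pk_value from staging_list);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);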
~ Madrid. -
hi
The following query is taking too much time (more than 30 minutes), working with 11g.
The table has three columns (rid, ida, geometry), and indexes have been created on all the columns.
The table has around 5,40,000 (540,000) point geometry records.
Please help me with your suggestions. I want to select the duplicate point geometries where ida='CORD'.
SQL> select a.rid, b.rid from totalrecords a, totalrecords b where a.ida='CORD' and b.idat='CORD' and
sdo_equal(a.geometry, b.geometry)='TRUE' and a.rid !=b.rid order by 1,2;
Regards
I have removed some AND conditions that were not necessary. It's just that Oracle can see, for example, in
a.ida='CORD' AND
b.idat='CORD' AND
a.rid !=b.rid AND
sdo_equal(a.geometry, b.geometry)='TRUE'
ORDER BY 1,2;
that if a.ida does not equal 'CORD', the whole set of conditions evaluates to FALSE, so Oracle will not bother evaluating the rest of the conditions, because it is all ANDed together and TRUE AND FALSE = FALSE.
So if you place your least expensive conditions first (even though the optimizer can and will reorder conditions), this will give you a small performance benefit - usually too small to notice, but on 5.4 million records it should be noticeable.
"and I have set layer_gtype=POINT."
Good, that will help. I forgot about that one (thanks Luc!).
"Now I am facing the problem of deleting the duplicate point geometries. The following query is taking too much time."
What is "too much time"? Do you need to delete these duplicate points on a daily or hourly basis, or is this a one-time cleanup action? If it's a one-time cleanup operation, does it really matter if it takes half an hour?
And if this is a daily or even hourly operation, then why don't you prevent the duplicates from entering the table in the first place? That will save you from having to clean up afterwards. Of course, this might not be possible with your business requirements.
Lastly: can you post an explain plan for your queries? Those might give us an idea of what is taking so much time. Please enclose the results of the explain plan with
[ c o d e ]
<code/results here>
[ / c o d e ]
that way the original formatting is kept and it makes things much easier to read.
Regards,
Stefan