catbundle.sql taking too much time while running it for CPU April 2011
Hi,
I am on Oracle 11.1.0.7.0 and I am applying the April 2011 CPU patch.
OS is Solaris SPARC.
As part of that, I am running catbundle.sql.
Currently, for the last 180 minutes, catbundle.sql has been at this state:
SQL> PROMPT Processing Oracle Java Supplied Packages...
Processing Oracle Java Supplied Packages...
SQL> ALTER SESSION SET current_schema = sys;
Session altered.
SQL> @?/rdbms/admin/initcdc.sql
SQL> Rem
SQL> Rem $Header: initcdc.sql 15-mar-2006.08:20:07 mbrey Exp $
SQL> Rem
SQL> Rem initcdc.sql
SQL> Rem
SQL> Rem Copyright (c) 2000, 2006, Oracle. All rights reserved.
SQL> Rem
SQL> Rem NAME
SQL> Rem initcdc.sql - script used to load CDC jar files into the database
SQL> Rem
SQL> Rem DESCRIPTION
SQL> Rem <short description of component this file declares/defines>
SQL> Rem
SQL> Rem NOTES
SQL> Rem script must be run as SYS
SQL> Rem
SQL> Rem MODIFIED (MM/DD/YY)
SQL> Rem mbrey 03/15/06 - bug 5092790 add datapump registration
SQL> Rem pabingha 02/25/03 - fix undoc interfaces
SQL> Rem wnorcott 03/14/02 - bug-2239726 disable triggers.
SQL> Rem wnorcott 01/31/02 - function 'active' return 0 or 1.
SQL> Rem wnorcott 01/30/02 - disable CDC triggers, CREATE_CHANGE_TABLE re-enables.
SQL> Rem wnorcott 06/26/01 - rid trailing slash. As per Mark Jungermann
SQL> Rem gviswana 05/25/01 - CREATE OR REPLACE SYNONYM
SQL> Rem jgalanes 11/17/00 - for Import/Export grant execute on util to
SQL> REM SELECT_CATLOG_ROLE
SQL> Rem wnorcott 09/07/00 - new loadjava syntax for performance.
SQL> Rem wnorcott 07/18/00 - rid LOGMNR_UID$.clientid
SQL> Rem wnorcott 06/28/00 - move logmnr_dict view here
SQL> Rem wnorcott 03/28/00 - fix trigger install
SQL> Rem wnorcott 03/27/00 - Install change table triggers
SQL> Rem mbrey 01/26/00 - script to load CDC jars
SQL> Rem mbrey 01/26/00 - Created
SQL> Rem
SQL> call sys.dbms_java.loadjava('-v -f -r -s -g public rdbms/jlib/CDC.jar');
It has run the @?/rdbms/admin/initcdc.sql script and is now not proceeding past this call. I am not sure if it is stuck or still running. How do I find that out?
What should I do?
*PS: I've read the note "Script Fails At Loadjava With ORA-03113 and ORA-03114 [ID 358232.1]", but that applies when the script errors out; in my case it is not erroring out at all.*
Thanks
Kk
Edited by: Kk on May 25, 2011 2:41 AM
Just an update:
I sat watching it for 3 hours, then thought of stopping it, pressed CTRL-Z, and boom, it just took me back to the SQL prompt showing that it had completed.
So, if someone else faces this issue, press CTRL-Z after some 5-10 minutes; it will show that it is running fine.
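For anyone who would rather confirm activity than interrupt the session, a minimal sketch of a check you can run from a second SQL*Plus session (the program filter is generic; adjust it to match your session):
-- Run this a few times a minute apart: if cpu_used keeps climbing or the
-- wait event changes, the loadjava call is working, not hung.
SELECT s.sid, s.serial#, s.status, s.event,
       s.seconds_in_wait, t.value AS cpu_used
  FROM v$session  s
  JOIN v$sesstat  t ON t.sid = s.sid
  JOIN v$statname n ON n.statistic# = t.statistic#
 WHERE n.name = 'CPU used by this session'
   AND s.program LIKE 'sqlplus%';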
Regards
Kk
Edited by: Kk on May 25, 2011 3:55 AM
Similar Messages
-
Simple APD is taking too much time to run
Hi All,
We have one APD created on our development system which is taking too much time to run.
This APD fetches data from a query having only 1200 records and puts it directly into a master-data attribute.
The query runs fine in the RSRT transaction and gives output within 5 seconds, but in the APD, if I display data over the query, it takes too much time.
The APD takes around 1 hour 20 minutes to run.
Thanks in advance!!
Hi,
When a query runs in APD it normally takes much, much longer than it does in RSRT. Run times such as you describe (5 secs in RSRT and >1.5 hrs in APD) are quite normal; I've seen some of my queries run for several hours in APD as well.
You just have to wait for it to complete.
Regards,
Suhas -
Report is taking too much time when run from the parameter form
Dear All
I have developed a report in Oracle Reports Builder 10g. While running it from Report Builder, the data comes back very fast.
But if it is run from the parameter form, it takes too much time to format the report as PDF.
Please suggest any configuration or setting if anybody has an idea.
Thanks
Hi,
The first thing to check is whether the query is running to completion in TOAD. By default, TOAD selects just the first 50 rows, whereas Discoverer must return all the rows before displaying results if a crosstab report is used.
Secondly, check that the queries and the explain plans are the same in Discoverer and TOAD. Although Discoverer shows the SQL in the SQL inspector, this isn't necessarily the SQL actually sent to the database. Use TOAD to interrogate the Discoverer session to determine the actual SQL, and compare this SQL and explain plan to the SQL you ran in TOAD.
Thirdly, check that the session context is the same in both cases. So check that any custom contexts and the USER_ENV context are the same, and that any security packages or VPD policies used in the SQL have been initialised the same way.
If you still cannot determine the difference then trace both sessions.
Rod West -
Update SQL taking too much time.. how to optimise?
Hi,
Here is the query; it is taking more than 3 hours to execute.
UPDATE process_sub_step proc1
   SET process_indicator = '0',
       process_result    = 'SC'
 WHERE EXISTS (SELECT *
                 FROM link_trans link1
                WHERE link1.tr_leu_id = proc1.tr_leu_id
                  AND link1.TYPE = 'S')
   AND process_substep_id = 'TR_02_03'
   AND process_indicator  = '2';
The above query is taking more than 3 hours to execute!
The record count in both tables is more than 10 lakh (about a million) rows.
The column used in the join condition has an index.
Execution plan:
UPDATE STATEMENT ALL_ROWS Cost: 6
  5 UPDATE PROCESS_SUB_STEP
    4 NESTED LOOPS SEMI Cost: 6 Bytes: 23 Cardinality: 1
      1 INDEX RANGE SCAN INDEX PROC_SSTL_COMB_IDX Cost: 3 Bytes: 16 Cardinality: 1
      3 TABLE ACCESS BY INDEX ROWID TABLE EGR.LINK_TRANS_CONS Cost: 2 Bytes: 378 Cardinality: 54
        2 INDEX RANGE SCAN INDEX LTC_TYPE_IDX Cost: 1 Cardinality: 2
What approach should I take to improve the performance?
But if I write the same query as a SELECT, like below, it displays results very quickly:
SELECT *
  FROM process_sub_step proc1
 WHERE EXISTS (SELECT *
                 FROM link_trans link1
                WHERE link1.tr_leu_id = proc1.tr_leu_id
                  AND link1.TYPE = 'S')
   AND process_substep_id = 'TR_02_03'
   AND process_indicator  = '2';
Thanks...
> here is the query; this is taking time more than 3hrs to execute.
Obviously. For every single row in the process_sub_step matching the filter criteria, a sub-select has to be executed on the link_trans table.
So if that filter criteria identifies a million rows, a million EXISTS SELECTs have to be executed. Even a unique index scan will accumulate into a large overhead. In your case, an index range scan is performed, which is slower than a unique scan.
Btw, there is no joining done here - just a nested loop process. I.e. read a row at a time from process_sub_step using a loop, and in each loop iteration, run a SELECT against link_trans.
To optimise this specific SQL means having to do less I/O (process fewer rows):
a) selecting fewer rows from process_sub_step to update
b) making the select filter on link_trans more restrictive
Neither is likely practical, which means you need to approach this problem with another method and not an EXISTS sub-select.
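A sketch of one such method, reusing the table and column names from the post (untested, so verify row counts on a copy first): drive the update from a single join via MERGE, so link_trans is read once instead of being probed per row.
MERGE INTO process_sub_step proc1
USING (SELECT DISTINCT tr_leu_id   -- collapse duplicates so each target
         FROM link_trans           -- row matches exactly once
        WHERE TYPE = 'S') link1
   ON (proc1.tr_leu_id = link1.tr_leu_id)
 WHEN MATCHED THEN
   UPDATE SET proc1.process_indicator = '0',
              proc1.process_result    = 'SC'
        WHERE proc1.process_substep_id = 'TR_02_03'
          AND proc1.process_indicator  = '2';
Whether the optimizer then chooses a hash join depends on statistics, but the row-by-row EXISTS probe is gone.
-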
Query taking too much time when run through Discoverer
Hi
I have created a report with a SQL query by creating a custom folder in Oracle Discoverer Desktop. The query uses a parameter with SYS_CONTEXT. When the report is executed from Discoverer it takes more than 5 minutes, while the same query executes in 30 seconds when run against the database through TOAD.
Please let me know what could be the reason for this.
Thanks
Hi,
The first thing to check is whether the query is running to completion in TOAD. By default, TOAD selects just the first 50 rows, whereas Discoverer must return all the rows before displaying results if a crosstab report is used.
Secondly, check that the queries and the explain plans are the same in Discoverer and TOAD. Although Discoverer shows the SQL in the SQL inspector, this isn't necessarily the SQL actually sent to the database. Use TOAD to interrogate the Discoverer session to determine the actual SQL, and compare this SQL and explain plan to the SQL you ran in TOAD.
Thirdly, check that the session context is the same in both cases. So check that any custom contexts and the USER_ENV context are the same, and that any security packages or VPD policies used in the SQL have been initialised the same way.
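For example, a minimal comparison you could run in both sessions (extend it with whatever custom application contexts your VPD policies read; those names are installation-specific):
SELECT SYS_CONTEXT('USERENV', 'SESSION_USER')   AS session_user,
       SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') AS current_schema,
       SYS_CONTEXT('USERENV', 'MODULE')         AS module
  FROM dual;
If the two sessions return different values here, the optimizer may be producing different plans for what looks like the same SQL.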
If you still cannot determine the difference then trace both sessions.
Rod West -
Client import taking too much time
Hi all,
I am importing a client. It has completed copying 19,803 of 19,803 tables, but for the last four hours its status has been "Processing".
scc3
Target Client 650
Copy Type Client Import Post-Proc
Profile SAP_CUST
Status Processing...
User SAP*
Start on 24.05.2009 / 15:08:03
Last Entry on 24.05.2009 / 15:36:25
Current Action: Post Processing
- Last Exit Program RGBCFL01
Transport Requests
- Client-Specific PRDKT00004
- Texts PRDKX00004
Statistics for this Run
- No. of Tables 19803 of 19803
- Deleted Lines 7
- Copied Lines 0
sm50
1 DIA 542 Running Yes SAPLTHFB 650 SAP*
7 BGD 4172 Running Yes 11479 RGTBGD23 650 SAP* Sequential Read D010INC
sm66
Server No. Type PID Status Reason Sem Start Error CPU Time User Report Action Table
prdsap_PRD_00 7 BTC 4172 Running Yes 11711 SAP* RGTBGD23 Sequential Read D010INC
Please guide me on why it is taking so much time when it has finished most of the work.
Best regards
Khan
The import is in post-processing. It digs through all the documents and adapts them to the new client. Most of the tables in the application area have a "MANDT" (= client) field which needs to be changed. Depending on the size of the client this can take a huge amount of time.
You can try to improve the speed by updating the table statistics for table D010INC.
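A minimal sketch of that, assuming an Oracle-based system (the SAPR3 owner is a placeholder for your installation's SAP schema; BRCONNECT's update-statistics step is the SAP-standard way to do the same):
BEGIN
  -- Fresh statistics help the post-processing job's sequential read on
  -- D010INC pick a sensible access path.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SAPR3',
    tabname => 'D010INC',
    cascade => TRUE);   -- gather index statistics too
END;
/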
Markus -
While condition is taking too much time
I have a query that returns around 2100 records (not many!). When I process my result set with a while loop, it takes too much time (around 30 seconds). Here is the code:
public static GroupHierEntity load(Connection con) throws SQLException {
    internalCustomer = false;
    String customerNameOfLogger = com.photomask.framework.ControlServlet.CUSTOMER_NAME;
    if (customerNameOfLogger == null || customerNameOfLogger.equals("")
            || customerNameOfLogger.equals("Unavailable")
            || customerNameOfLogger.startsWith("DPI")
            || customerNameOfLogger.startsWith("DUPONT")) {
        internalCustomer = true;
    }
    // show all groups to internal customers and only their customer groups for external customers
    if (internalCustomer) {
        stmtLoad = con.prepareStatement(sqlLoad);
        ResultSet rs = stmtLoad.executeQuery();
        return new GroupHierEntity(rs);
    } else {
        stmtLoadExternal = con.prepareStatement(sqlLoadExternal);
        stmtLoadExternal.setString(1, customerNameOfLogger);
        stmtLoadExternal.setString(2, customerNameOfLogger);
        ResultSet rs = stmtLoadExternal.executeQuery();
        return new GroupHierEntity(rs);
    }
}

GroupHierEntity ge = GroupHierEntity.load(con);
while (ge.next()) {
    lvl = ge.getInt("lvl");
    oid = ge.getLong("oid");
    name = ge.getString("name");
    if (internalCustomer && lvl == 2) {
        int i = getAlphaIndex(name);
        super.setAppendRoot(alphaIndex);
    }
    gn = new GroupListDataNode(lvl + 1, oid, name);
    gn.setSelectable(true);
    this.addNode(gn);
    count++;
}
System.out.println("*** count " + count);
ge.close();
========================
Then I removed everything in the while loop and ran it as-is; it still takes the same time (30 secs):
while (ge.next()) {
    count++;
}
Why is the while condition (ge.next()) taking so much time? Is there any other efficient way of reading the result set?
Thanks,
bala
I tried all these things. The query is not taking much time (1 sec), but the resultset.next() is taking too much time. I counted the time by putting System.out.println at various points to see which part takes how much time.
executeQuery() is only taking 1 sec. Processing the result set (moving the cursor to the next position) is taking too much time.
I have similar queries that return some 800 rows , that only takes 1 sec.
I have my doubts about resultset.next(). Any other alternative?
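One thing worth checking is the JDBC fetch size: with Oracle's driver the default is 10 rows per network round trip, so iterating 2100 rows costs a couple of hundred round trips, which can easily dominate even when the query itself runs in a second. A minimal, hypothetical sketch (class and method names are mine, not from the post):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class FetchSizeDemo {
    // Count rows while fetching them in larger batches, so that each
    // ResultSet.next() mostly reads a client-side buffer instead of
    // doing a network round trip.
    static int countRows(Connection con, String sql) throws SQLException {
        int count = 0;
        PreparedStatement stmt = con.prepareStatement(sql);
        stmt.setFetchSize(500); // illustrative value; tune for your row width
        ResultSet rs = stmt.executeQuery();
        while (rs.next()) {
            count++;
        }
        rs.close();
        stmt.close();
        return count;
    }
}
-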
Taking too much time in Rules (DTP schedule run)
Hi,
I am scheduling the DTP, which has filters to minimize the data loaded.
When I run the DTP, it takes too much time in the "rules" step (I can see the DTP monitor status package by package and step by step: "Start Routine", "Rules", "End Routine").
It is consuming too much time in the rules mapping.
What is the problem, and are there any solutions?
Regards,
sree
Hi,
The time taken at "rules" depends on the complexity involved in your routine. If it is a complex calculation, it will take time.
Also check your DTP batch settings, i.e. how many background processes are used to perform the DTP, and the job class.
You can find these as follows:
Go to the DTP, open the Goto menu, and select "Settings for Batch Manager".
In that screen, increase the number of processes from 3 to a higher number (max 9).
Change the job class to 'A'.
If your DTP is still running, cancel it (i.e. kill the DTP), delete the load from the cube,
change these settings, and run your DTP one more time.
You can observe the difference.
Reddy -
Auto Invoice program taking too much time: problem with an update SQL
Hi,
Oracle DB version: 11.2.0.3
Oracle EBS version: 12.1.3
Though we have a SEV-1 SR with Oracle, we have not had much success.
We have an Auto Invoice program which runs many times a day, and it has been taking too much time since the beginning. On troubleshooting we have found one query to be taking most of the time, and we seek suggestions on how to tune it. I am attaching the explain plan for the same. It is an update query. Please guide.
Plan
UPDATE STATEMENT ALL_ROWSCost: 0 Bytes: 124 Cardinality: 1
50 UPDATE AR.RA_CUST_TRX_LINE_GL_DIST_ALL
27 FILTER
26 HASH JOIN Cost: 8,937,633 Bytes: 4,261,258,760 Cardinality: 34,364,990
24 VIEW VIEW SYS.VW_NSO_1 Cost: 8,618,413 Bytes: 446,744,870 Cardinality: 34,364,990
23 SORT UNIQUE Cost: 8,618,413 Bytes: 4,042,339,978 Cardinality: 34,364,990
22 UNION-ALL
9 FILTER
8 SORT GROUP BY Cost: 5,643,052 Bytes: 3,164,892,625 Cardinality: 25,319,141
7 HASH JOIN Cost: 1,640,602 Bytes: 32,460,436,875 Cardinality: 259,683,495
1 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993 Bytes: 402,499,500 Cardinality: 20,124,975
6 HASH JOIN Cost: 853,567 Bytes: 22,544,143,440 Cardinality: 214,706,128
4 HASH JOIN Cost: 536,708 Bytes: 2,357,000,550 Cardinality: 29,835,450
2 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 153,008 Bytes: 1,163,582,550 Cardinality: 29,835,450
3 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 307,314 Bytes: 1,193,526,000 Cardinality: 29,838,150
5 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 132,951 Bytes: 3,123,197,116 Cardinality: 120,122,966
21 FILTER
20 SORT GROUP BY Cost: 2,975,360 Bytes: 877,447,353 Cardinality: 9,045,849
19 HASH JOIN Cost: 998,323 Bytes: 17,548,946,769 Cardinality: 180,916,977
13 VIEW VIEW AR.index$_join$_027 Cost: 108,438 Bytes: 867,771,256 Cardinality: 78,888,296
12 HASH JOIN
10 INDEX RANGE SCAN INDEX AR.RA_CUSTOMER_TRX_N15 Cost: 58,206 Bytes: 867,771,256 Cardinality: 78,888,296
11 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 62,322 Bytes: 867,771,256 Cardinality: 78,888,296
18 HASH JOIN Cost: 748,497 Bytes: 3,281,713,302 Cardinality: 38,159,457
14 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993 Bytes: 402,499,500 Cardinality: 20,124,975
17 HASH JOIN Cost: 519,713 Bytes: 1,969,317,900 Cardinality: 29,838,150
15 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 302,822 Bytes: 716,115,600 Cardinality: 29,838,150
16 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 149,847 Bytes: 1,253,202,300 Cardinality: 29,838,150
25 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 157,552 Bytes: 5,158,998,615 Cardinality: 46,477,465
41 SORT GROUP BY Bytes: 75 Cardinality: 1
40 FILTER
39 MERGE JOIN CARTESIAN Cost: 11 Bytes: 75 Cardinality: 1
35 NESTED LOOPS Cost: 8 Bytes: 50 Cardinality: 1
32 NESTED LOOPS Cost: 5 Bytes: 30 Cardinality: 1
29 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 3 Bytes: 22 Cardinality: 1
28 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_LINES_U1 Cost: 2 Cardinality: 1
31 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 2 Bytes: 133,114,520 Cardinality: 16,639,315
30 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
34 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 20 Cardinality: 1
33 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 2 Cardinality: 1
38 BUFFER SORT Cost: 9 Bytes: 25 Cardinality: 1
37 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 25 Cardinality: 1
36 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2 Cardinality: 1
49 SORT GROUP BY Bytes: 48 Cardinality: 1
48 FILTER
47 NESTED LOOPS
45 NESTED LOOPS Cost: 7 Bytes: 48 Cardinality: 1
43 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 4 Bytes: 20 Cardinality: 1
42 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 3 Cardinality: 1
44 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2 Cardinality: 1
46 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3 Bytes: 28 Cardinality: 1
As per Oracle, they had suggested multiple patches, but that has not been helpful. Please suggest how I should tune this query. I don't have much of a clue about query tuning.
Regards
Hi Paul, my bad. I am sorry I missed it.
Query as below :
UPDATE RA_CUST_TRX_LINE_GL_DIST LGD SET (AMOUNT, ACCTD_AMOUNT) = (SELECT /*+ index(rec1 RA_CUST_TRX_LINE_GL_DIST_N6) ordered */ NVL(LGD.AMOUNT, 0) - ( SUM(LGD2.AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * CTL.EXTENDED_AMOUNT ) ), NVL(LGD.ACCTD_AMOUNT, 0) - ( SUM(LGD2.ACCTD_AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX CT, RA_CUST_TRX_LINE_GL_DIST LGD2, RA_CUST_TRX_LINE_GL_DIST REC1 WHERE CTL.CUSTOMER_TRX_LINE_ID = LGD2.CUSTOMER_TRX_LINE_ID AND CTL.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND LGD.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD2.ACCOUNT_SET_FLAG = 'N' AND REC1.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND REC1.ACCOUNT_CLASS = 'REC' AND REC1.LATEST_REC_FLAG = 'Y' AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD2.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, REC1.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, CT.EXCHANGE_RATE ), PERCENT = (SELECT /*+ index(rec2 RA_CUST_TRX_LINE_GL_DIST_N6) */ DECODE(LGD.ACCOUNT_CLASS || LGD.ACCOUNT_SET_FLAG, 'SUSPENSEN', LGD.PERCENT, 'UNBILLN', LGD.PERCENT, 'UNEARNN', LGD.PERCENT, NVL(LGD.PERCENT, 0) - ( SUM(NVL(LGD4.PERCENT, 0)) - DECODE(REC2.GL_DATE, NVL(LGD.GL_DATE, REC2.GL_DATE), 100, 0) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD4, RA_CUST_TRX_LINE_GL_DIST REC2 WHERE LGD.CUSTOMER_TRX_LINE_ID = LGD4.CUSTOMER_TRX_LINE_ID AND REC2.CUSTOMER_TRX_ID = LGD.CUSTOMER_TRX_ID AND REC2.CUSTOMER_TRX_ID = LGD4.CUSTOMER_TRX_ID AND REC2.ACCOUNT_CLASS = 'REC' AND REC2.LATEST_REC_FLAG = 'Y' AND LGD4.ACCOUNT_SET_FLAG = LGD.ACCOUNT_SET_FLAG AND DECODE(LGD4.ACCOUNT_SET_FLAG, 'Y', LGD4.ACCOUNT_CLASS, LGD.ACCOUNT_CLASS) = LGD.ACCOUNT_CLASS AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD4.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY REC2.GL_DATE, LGD.GL_DATE ), LAST_UPDATED_BY = :B1 , LAST_UPDATE_DATE = SYSDATE WHERE CUST_TRX_LINE_GL_DIST_ID IN (SELECT /*+ index(rec3 RA_CUST_TRX_LINE_GL_DIST_N6) */ MIN(DECODE(LGD3.GL_POSTED_DATE, NULL, LGD3.CUST_TRX_LINE_GL_DIST_ID, NULL) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX T, RA_CUST_TRX_LINE_GL_DIST LGD3, RA_CUST_TRX_LINE_GL_DIST REC3 WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = CTL.CUSTOMER_TRX_ID AND (CTL.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE' ) OR (CTL.LINE_TYPE = 'LINE' AND CTL.ACCOUNTING_RULE_ID IS NULL )) AND LGD3.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD3.ACCOUNT_SET_FLAG = 'N' AND REC3.CUSTOMER_TRX_ID = T.CUSTOMER_TRX_ID AND REC3.ACCOUNT_CLASS = 'REC' AND REC3.LATEST_REC_FLAG = 'Y' AND NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) = DECODE(:B4 , 'INV', -1, 'REGULAR_CM', T.PREVIOUS_CUSTOMER_TRX_ID, NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, LGD3.GL_DATE, REC3.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, T.EXCHANGE_RATE HAVING ( SUM(NVL(LGD3.AMOUNT, 0)) <> CTL.EXTENDED_AMOUNT * DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) OR SUM(NVL(LGD3.ACCTD_AMOUNT, 0)) <> DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) UNION SELECT /*+ index(rec5 RA_CUST_TRX_LINE_GL_DIST_N6) INDEX (lgd5 ra_cust_trx_line_gl_dist_n6) index(ctl2 ra_customer_trx_lines_u1) */ TO_NUMBER( MIN(DECODE(LGD5.GL_POSTED_DATE||LGD5.ACCOUNT_CLASS|| LGD5.ACCOUNT_SET_FLAG, 'REVN', 
LGD5.CUST_TRX_LINE_GL_DIST_ID, 'REVY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNEARNY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNBILLY', LGD5.CUST_TRX_LINE_GL_DIST_ID, NULL ) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD5, RA_CUST_TRX_LINE_GL_DIST REC5, RA_CUSTOMER_TRX_LINES CTL2, RA_CUSTOMER_TRX T WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = REC5.CUSTOMER_TRX_ID AND CTL2.CUSTOMER_TRX_LINE_ID = LGD5.CUSTOMER_TRX_LINE_ID AND REC5.CUSTOMER_TRX_ID = LGD5.CUSTOMER_TRX_ID AND REC5.ACCOUNT_CLASS = 'REC' AND REC5.LATEST_REC_FLAG = 'Y' AND (CTL2.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE') OR (CTL2.LINE_TYPE = 'LINE' AND (CTL2.ACCOUNTING_RULE_ID IS NULL OR LGD5.ACCOUNT_SET_FLAG = 'Y' ))) GROUP BY LGD5.CUSTOMER_TRX_LINE_ID, LGD5.GL_DATE, REC5.GL_DATE, LGD5.ACCOUNT_SET_FLAG, DECODE(LGD5.ACCOUNT_SET_FLAG, 'N', NULL, LGD5.ACCOUNT_CLASS) HAVING SUM(NVL(LGD5.PERCENT, 0)) <> DECODE( NVL(LGD5.GL_DATE, REC5.GL_DATE), REC5.GL_DATE, 100, 0) )
I understand that this could be a seeded query but my attempt is to tune it.
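Before rewriting anything, it may help to see which plan step actually burns the time. A sketch using Real-Time SQL Monitoring (this assumes Diagnostics and Tuning Pack licensing; &sql_id is a substitution placeholder for the value found by the first query):
-- Locate the long-running update among monitored statements.
SELECT sql_id, status, elapsed_time
  FROM v$sql_monitor
 WHERE sql_text LIKE 'UPDATE RA_CUST_TRX_LINE_GL_DIST%';

-- Render the monitor report for that statement.
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id => '&sql_id',
         type   => 'TEXT') AS report
  FROM dual;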
Regards -
I am having an issue with my iPhone 4: while playing music it takes too much time to play
I am using an iPhone which is taking too much time to play music, and sometimes it shows one album's cover while playing another song. Please help and let me know what the issue is.
Hello Sanjay,
I would recommend steps 1, 3, and 5 from our iPhone Troubleshooting Assistant found here: http://www.apple.com/support/iphone/assistant/phone/#section_1
Here is step 1 to get you started.
Restart iPhone
To restart iPhone, first turn iPhone off by pressing and holding the Sleep/Wake button until a red slider appears. Slide your finger across the slider and iPhone will turn off after a few moments.
Next, turn iPhone on by pressing and holding the Sleep/Wake button until the Apple logo appears.
Is iPhone not responding? To reset iPhone, press and hold the Sleep/Wake button and the Home button at the same time for at least 10 seconds, until the Apple logo appears.
If your device does not turn on or displays a red battery icon, try recharging next.
Take care,
Sterling -
Taking too much time collecting objects in business content activation
Hi all,
I am collecting business content objects for activation. I have selected the 0fiAA_cha object, but while collecting for activation it takes too much time, then asks for source system authorisation, and then throws the error "maximum run time exceeded". I have selected "data flow before" there.
What can be the reason for it?
Please help.
Hi,
You should also always try to have the latest BI Content patch installed, but I don't think this is the problem. It seems that there are a lot of objects to collect. Under 'Grouping' you can select the option 'Only Necessary Objects'; please check if you can use this option to install the objects that you need from content.
Best Regards,
Des. -
Why is it taking too much time to kill the process?
Hi All,
Today one of my users ran a calc script and the process was taking too much time, so I killed the process. I am wondering about one thing here: even killing the process is taking too long, when generally it does not take more than 2 seconds. I did this through EAS.
After that I ran this MaxL statement:
alter system kill request 552599515;
but it was of no use at all.
Please reply if you have any solutions to kill this process.
Thanks in advance.
Ram.
Hi Ram,
1. Firstly, how long do your calculation scripts normally run?
2. While it is running, you can go to the logs and monitor where exactly the script is taking time.
3. Sometimes it does take time to cancel a transaction (as it might be in the middle of one).
4. MaxL is always good for killing sessions, as you did. It should be successful. Check what the logs say, and also the "sessions" view, which might say "terminating", and finish it off (see the MaxL sketch below).
5. If nothing works, and in the worst-case scenario it is taking time without doing anything, then log off all the users, stop the database, and start it again.
6. Do log off all the users, so that you don't corrupt any filter-related .sec file.
Be very careful if it is production (and I assume you have recent backups).
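A hedged MaxL sketch of that check (run from the MaxL shell as an administrator; syntax per the standard MaxL grammar):
display session all;                    /* look for the request marked terminating */
alter system logout session all force;  /* last resort: force all sessions off     */
The forced logout is disruptive, so treat it as the step just before stopping and restarting the database.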
Sandeep Reddy Enti
HCC
http://hyperionconsultancy.com/ -
Full DTP taking too much time to load
Hi All,
I am facing an issue where a DTP is taking too much time to load data from a DSO to a cube via process chain, and also when running it manually.
There are 6 similar DTPs which load data for different countries (different DSOs and cubes as source and target respectively) for the last 7 days based on GI Date. All the DTPs pull almost the same number of records and finish within 25-30 min, but one DTP takes around 3 hours. The problem started a couple of days back.
I have changed the parallel processes from 3->4->5 and the packet size from 50,000->10,000->100,000, but no improvement. I also want to mention that all the source DSOs and target cubes have the same structure. All the transformations have field routines and end routines.
Can you all please share some pointers which can help.
Thanks
Prateek
Hi Raman,
This is what I get when I check the report. Can this be causing issues, as 2 rows have a ratio >= 100%?
ETVC0006 /BIC/DETVC00069 rows: 1.484 ratio: 0 %
ETVC0006 /BIC/DETVC0006C rows: 15.059.600 ratio: 103 %
ETVC0006 /BIC/DETVC0006D rows: 242 ratio: 0 %
ETVC0006 /BIC/DETVC0006P rows: 66 ratio: 0 %
ETVC0006 /BIC/DETVC0006T rows: 156 ratio: 0 %
ETVC0006 /BIC/DETVC0006U rows: 2 ratio: 0 %
ETVC0006 /BIC/EETVC0006 rows: 14.680.700 ratio: 100 %
ETVC0006 /BIC/FETVC0006 rows: 0 ratio: 0 %
ETVC0007 rows: 13.939.200 density: 0,0 % -
Report taking too much time in the portal
Hi friends,
We have developed a report on the ODS, and we published it on the portal.
The problem is that when users execute the report at the same time it takes too much time, and because of this the performance is very poor.
Is there any way to sort out this issue? For example, can we send the report to the individual users' mail IDs
so that they do not have to log in to the portal?
Or can we create the same report on the cube?
What would be the main difference if the report were made on the cube instead of the ODS?
Please help me.
Thanks in advance
sridath
Hi
Try this to improve the performance of the query.
Find the query run-time.
Where to find the query run-time?
Note 557870 - FAQ: BW Query Performance
Note 130696 - Performance trace in BW
This info may be helpful.
General tips
Using aggregates and compression.
Using fewer and less complex cell definitions if possible.
1. Avoid using too many navigational attributes.
2. Avoid RKFs and CKFs.
3. Avoid many characteristics in rows.
By using T-codes ST03 or ST03N
Go to transaction ST03 > switch to expert mode > from left side menu > and there in system load history and distribution for a particular day > check query execution time.
/people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
/people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
Try table rsddstats to get the statistics
Using cache memory will decrease the loading time of the report.
Run reporting agent at night and sending results to email. This will ensure use of OLAP cache. So later report execution will retrieve the result faster from the OLAP cache.
Also try
1. Use different parameters in ST03 to see the two important parameters aggregation ratio and records transferred to F/E to DB selected.
2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
It will show dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the performance metric of the cube, measure query runtime.
3. To check the performance of the aggregates,see the columns valuation and usage in aggregates.
Open the Aggregates...and observe VALUATION and USAGE columns.
"---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
In valuation column,if there are more positive sign it means that the aggregate performance is good and it is useful to have this aggregate.But if it has more negative sign it means we need not better use that aggregate.
In usage column,we will come to know how far the aggregate has been used in query.
Thus we can check the performance of the aggregate.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
performance ISSUE related to AGGREGATE
Note 356732 - Performance Tuning for Queries with Aggregates
Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This will tell you whether it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
By implementing BW Statistics Business Content - you need to install, feed data and through ready made reports which for analysis.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
You can go to T-Code DB20 which gives you all the performance related information like
Partitions
Databases
Schemas
Buffer Pools
Tablespaces etc
Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 202469 - Using aggregate check tool
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
You can find out whether an aggregate is useful or useless by checking the tables RSDDSTATAGGRDEF*.
Run the query in RSRT with "statistics execute"; when you come back you will get a STATUID. Copy this and check it in the table.
This tells you exactly which InfoObjects the query hits; if any one of the objects is missing, it is a useless aggregate.
6. Check SE11 > table RSDDAGGRDIR. You can find the last call-up of each aggregate in the table.
Generate Report in RSRT
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Business Intelligence Journal Improving Query Performance in Data Warehouses
http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
Achieving BI Query Performance Building Business Intelligence
http://www.dmreview.com/issues/20051001/1038109-1.html
Assign points if useful
Cheers
SM -
Taking too much time to connect to SAP B1
Dear All,
My add-on runs successfully on one server, while the same add-on elsewhere takes too much time to connect (around oCompany.Connect).
Can anyone give me any idea regarding this?
Thanks in advance.
Regards
Sanjay
Dear Petr,
Thanks for your answer.
Let me try it out...
I will get back to you soon...
Thanks
Regards
Sanjay