Auto Invoice program taking too much time: problem with UPDATE SQL

Hi,
Oracle DB version: 11.2.0.3
Oracle EBS version: 12.1.3
Though we have a SEV-1 SR open with Oracle, we have not had much success.
We have an Auto Invoice program which runs many times a day, and it has been taking too much time since the beginning. On troubleshooting we found one query taking most of the time, and we seek suggestions on how to tune it. I am attaching the explain plan for the same. It is an UPDATE query. Please guide.
Plan
UPDATE STATEMENT  ALL_ROWS  Cost: 0  Bytes: 124  Cardinality: 1
  50 UPDATE AR.RA_CUST_TRX_LINE_GL_DIST_ALL
    27 FILTER
      26 HASH JOIN  Cost: 8,937,633  Bytes: 4,261,258,760  Cardinality: 34,364,990
        24 VIEW VIEW SYS.VW_NSO_1  Cost: 8,618,413  Bytes: 446,744,870  Cardinality: 34,364,990
          23 SORT UNIQUE  Cost: 8,618,413  Bytes: 4,042,339,978  Cardinality: 34,364,990
            22 UNION-ALL
              9 FILTER
                8 SORT GROUP BY  Cost: 5,643,052  Bytes: 3,164,892,625  Cardinality: 25,319,141
                  7 HASH JOIN  Cost: 1,640,602  Bytes: 32,460,436,875  Cardinality: 259,683,495
                    1 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL  Cost: 154,993  Bytes: 402,499,500  Cardinality: 20,124,975
                    6 HASH JOIN  Cost: 853,567  Bytes: 22,544,143,440  Cardinality: 214,706,128
                      4 HASH JOIN  Cost: 536,708  Bytes: 2,357,000,550  Cardinality: 29,835,450
                        2 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL  Cost: 153,008  Bytes: 1,163,582,550  Cardinality: 29,835,450
                        3 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL  Cost: 307,314  Bytes: 1,193,526,000  Cardinality: 29,838,150
                      5 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_ALL  Cost: 132,951  Bytes: 3,123,197,116  Cardinality: 120,122,966
              21 FILTER
                20 SORT GROUP BY  Cost: 2,975,360  Bytes: 877,447,353  Cardinality: 9,045,849
                  19 HASH JOIN  Cost: 998,323  Bytes: 17,548,946,769  Cardinality: 180,916,977
                    13 VIEW VIEW AR.index$_join$_027  Cost: 108,438  Bytes: 867,771,256  Cardinality: 78,888,296
                      12 HASH JOIN
                        10 INDEX RANGE SCAN INDEX AR.RA_CUSTOMER_TRX_N15  Cost: 58,206  Bytes: 867,771,256  Cardinality: 78,888,296
                        11 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1  Cost: 62,322  Bytes: 867,771,256  Cardinality: 78,888,296
                    18 HASH JOIN  Cost: 748,497  Bytes: 3,281,713,302  Cardinality: 38,159,457
                      14 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL  Cost: 154,993  Bytes: 402,499,500  Cardinality: 20,124,975
                      17 HASH JOIN  Cost: 519,713  Bytes: 1,969,317,900  Cardinality: 29,838,150
                        15 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL  Cost: 302,822  Bytes: 716,115,600  Cardinality: 29,838,150
                        16 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL  Cost: 149,847  Bytes: 1,253,202,300  Cardinality: 29,838,150
        25 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL  Cost: 157,552  Bytes: 5,158,998,615  Cardinality: 46,477,465
    41 SORT GROUP BY  Bytes: 75  Cardinality: 1
      40 FILTER
        39 MERGE JOIN CARTESIAN  Cost: 11  Bytes: 75  Cardinality: 1
          35 NESTED LOOPS  Cost: 8  Bytes: 50  Cardinality: 1
            32 NESTED LOOPS  Cost: 5  Bytes: 30  Cardinality: 1
              29 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_LINES_ALL  Cost: 3  Bytes: 22  Cardinality: 1
                28 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_LINES_U1  Cost: 2  Cardinality: 1
              31 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL  Cost: 2  Bytes: 133,114,520  Cardinality: 16,639,315
                30 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1  Cost: 1  Cardinality: 1
            34 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL  Cost: 3  Bytes: 20  Cardinality: 1
              33 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6  Cost: 2  Cardinality: 1
          38 BUFFER SORT  Cost: 9  Bytes: 25  Cardinality: 1
            37 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL  Cost: 3  Bytes: 25  Cardinality: 1
              36 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1  Cost: 2  Cardinality: 1
    49 SORT GROUP BY  Bytes: 48  Cardinality: 1
      48 FILTER
        47 NESTED LOOPS
          45 NESTED LOOPS  Cost: 7  Bytes: 48  Cardinality: 1
            43 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL  Cost: 4  Bytes: 20  Cardinality: 1
              42 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6  Cost: 3  Cardinality: 1
            44 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1  Cost: 2  Cardinality: 1
          46 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL  Cost: 3  Bytes: 28  Cardinality: 1
Oracle suggested multiple patches, but they have not been helpful. Please suggest how I can tune this query; I don't have much experience with query tuning.
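If it helps, the actual row counts per plan step can be captured and compared with the estimates above. A minimal sketch (the SQL_ID 'a1b2c3d4e5f6g' is a hypothetical placeholder to be looked up in V$SQL, and statistics_level must be ALL in the session that actually runs the UPDATE, which for a concurrent program usually means setting it temporarily at system level):
-- collect row-source statistics for subsequent executions
ALTER SESSION SET statistics_level = ALL;
-- after the Auto Invoice program has run the statement again,
-- compare estimated vs. actual rows for every plan step
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('a1b2c3d4e5f6g', NULL, 'ALLSTATS LAST'));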
Regards

Hi Paul, my bad. I am sorry I missed it.
The query is as below:
UPDATE RA_CUST_TRX_LINE_GL_DIST LGD
SET (AMOUNT, ACCTD_AMOUNT) =
    (SELECT /*+ index(rec1 RA_CUST_TRX_LINE_GL_DIST_N6) ordered */
            NVL(LGD.AMOUNT, 0)
            - ( SUM(LGD2.AMOUNT)
                - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * CTL.EXTENDED_AMOUNT ) ),
            NVL(LGD.ACCTD_AMOUNT, 0)
            - ( SUM(LGD2.ACCTD_AMOUNT)
                - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0)
                    * DECODE(:B2, NULL,
                             ROUND( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1), :B3 ),
                             ROUND( ( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) )
       FROM RA_CUSTOMER_TRX_LINES CTL,
            RA_CUSTOMER_TRX CT,
            RA_CUST_TRX_LINE_GL_DIST LGD2,
            RA_CUST_TRX_LINE_GL_DIST REC1
      WHERE CTL.CUSTOMER_TRX_LINE_ID = LGD2.CUSTOMER_TRX_LINE_ID
        AND CTL.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID
        AND LGD.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID
        AND LGD2.ACCOUNT_SET_FLAG = 'N'
        AND REC1.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID
        AND REC1.ACCOUNT_CLASS = 'REC'
        AND REC1.LATEST_REC_FLAG = 'Y'
        AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD2.GL_DATE, TO_DATE( 2415021, 'J') )
      GROUP BY CTL.CUSTOMER_TRX_LINE_ID, REC1.GL_DATE, CTL.EXTENDED_AMOUNT,
               CTL.REVENUE_AMOUNT, CT.EXCHANGE_RATE ),
    PERCENT =
    (SELECT /*+ index(rec2 RA_CUST_TRX_LINE_GL_DIST_N6) */
            DECODE(LGD.ACCOUNT_CLASS || LGD.ACCOUNT_SET_FLAG,
                   'SUSPENSEN', LGD.PERCENT,
                   'UNBILLN', LGD.PERCENT,
                   'UNEARNN', LGD.PERCENT,
                   NVL(LGD.PERCENT, 0)
                   - ( SUM(NVL(LGD4.PERCENT, 0))
                       - DECODE(REC2.GL_DATE, NVL(LGD.GL_DATE, REC2.GL_DATE), 100, 0) ) )
       FROM RA_CUST_TRX_LINE_GL_DIST LGD4,
            RA_CUST_TRX_LINE_GL_DIST REC2
      WHERE LGD.CUSTOMER_TRX_LINE_ID = LGD4.CUSTOMER_TRX_LINE_ID
        AND REC2.CUSTOMER_TRX_ID = LGD.CUSTOMER_TRX_ID
        AND REC2.CUSTOMER_TRX_ID = LGD4.CUSTOMER_TRX_ID
        AND REC2.ACCOUNT_CLASS = 'REC'
        AND REC2.LATEST_REC_FLAG = 'Y'
        AND LGD4.ACCOUNT_SET_FLAG = LGD.ACCOUNT_SET_FLAG
        AND DECODE(LGD4.ACCOUNT_SET_FLAG, 'Y', LGD4.ACCOUNT_CLASS, LGD.ACCOUNT_CLASS) = LGD.ACCOUNT_CLASS
        AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD4.GL_DATE, TO_DATE( 2415021, 'J') )
      GROUP BY REC2.GL_DATE, LGD.GL_DATE ),
    LAST_UPDATED_BY = :B1,
    LAST_UPDATE_DATE = SYSDATE
WHERE CUST_TRX_LINE_GL_DIST_ID IN
    (SELECT /*+ index(rec3 RA_CUST_TRX_LINE_GL_DIST_N6) */
            MIN(DECODE(LGD3.GL_POSTED_DATE, NULL, LGD3.CUST_TRX_LINE_GL_DIST_ID, NULL) )
       FROM RA_CUSTOMER_TRX_LINES CTL,
            RA_CUSTOMER_TRX T,
            RA_CUST_TRX_LINE_GL_DIST LGD3,
            RA_CUST_TRX_LINE_GL_DIST REC3
      WHERE T.REQUEST_ID = :B5
        AND T.CUSTOMER_TRX_ID = CTL.CUSTOMER_TRX_ID
        AND (CTL.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE' )
             OR (CTL.LINE_TYPE = 'LINE' AND CTL.ACCOUNTING_RULE_ID IS NULL ))
        AND LGD3.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID
        AND LGD3.ACCOUNT_SET_FLAG = 'N'
        AND REC3.CUSTOMER_TRX_ID = T.CUSTOMER_TRX_ID
        AND REC3.ACCOUNT_CLASS = 'REC'
        AND REC3.LATEST_REC_FLAG = 'Y'
        AND NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) =
            DECODE(:B4, 'INV', -1,
                        'REGULAR_CM', T.PREVIOUS_CUSTOMER_TRX_ID,
                        NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) )
      GROUP BY CTL.CUSTOMER_TRX_LINE_ID, LGD3.GL_DATE, REC3.GL_DATE,
               CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, T.EXCHANGE_RATE
     HAVING ( SUM(NVL(LGD3.AMOUNT, 0)) <> CTL.EXTENDED_AMOUNT * DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0)
              OR SUM(NVL(LGD3.ACCTD_AMOUNT, 0)) <>
                 DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0)
                 * DECODE(:B2, NULL,
                          ROUND( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1), :B3 ),
                          ROUND( ( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) )
      UNION
     SELECT /*+ index(rec5 RA_CUST_TRX_LINE_GL_DIST_N6) INDEX (lgd5 ra_cust_trx_line_gl_dist_n6) index(ctl2 ra_customer_trx_lines_u1) */
            TO_NUMBER( MIN(DECODE(LGD5.GL_POSTED_DATE||LGD5.ACCOUNT_CLASS||LGD5.ACCOUNT_SET_FLAG,
                                  'REVN', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                  'REVY', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                  'TAXN', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                  'TAXY', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                  'FREIGHTN', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                  'FREIGHTY', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                  'CHARGESN', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                  'CHARGESY', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                  'UNEARNY', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                  'UNBILLY', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                  NULL ) ) )
       FROM RA_CUST_TRX_LINE_GL_DIST LGD5,
            RA_CUST_TRX_LINE_GL_DIST REC5,
            RA_CUSTOMER_TRX_LINES CTL2,
            RA_CUSTOMER_TRX T
      WHERE T.REQUEST_ID = :B5
        AND T.CUSTOMER_TRX_ID = REC5.CUSTOMER_TRX_ID
        AND CTL2.CUSTOMER_TRX_LINE_ID = LGD5.CUSTOMER_TRX_LINE_ID
        AND REC5.CUSTOMER_TRX_ID = LGD5.CUSTOMER_TRX_ID
        AND REC5.ACCOUNT_CLASS = 'REC'
        AND REC5.LATEST_REC_FLAG = 'Y'
        AND (CTL2.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE')
             OR (CTL2.LINE_TYPE = 'LINE'
                 AND (CTL2.ACCOUNTING_RULE_ID IS NULL OR LGD5.ACCOUNT_SET_FLAG = 'Y' )))
      GROUP BY LGD5.CUSTOMER_TRX_LINE_ID, LGD5.GL_DATE, REC5.GL_DATE, LGD5.ACCOUNT_SET_FLAG,
               DECODE(LGD5.ACCOUNT_SET_FLAG, 'N', NULL, LGD5.ACCOUNT_CLASS)
     HAVING SUM(NVL(LGD5.PERCENT, 0)) <> DECODE( NVL(LGD5.GL_DATE, REC5.GL_DATE), REC5.GL_DATE, 100, 0) )
I understand that this is probably a seeded query, but my attempt is to tune it.
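Since a seeded statement cannot be rewritten, the usual levers are fresh optimizer statistics on the AR tables in the plan and a SQL profile from the SQL Tuning Advisor. A minimal sketch, with hypothetical SQL_ID and task-name placeholders (note that on EBS the supported route for statistics is FND_STATS / the Gather Schema Statistics concurrent program; plain DBMS_STATS is shown only for illustration):
-- refresh statistics on the biggest table in the plan
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'AR',
    tabname    => 'RA_CUST_TRX_LINE_GL_DIST_ALL',
    cascade    => TRUE,
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/
-- let the SQL Tuning Advisor analyse the cursor and, if it finds a
-- better plan, recommend a SQL profile that can then be accepted
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id     => 'a1b2c3d4e5f6g',    -- placeholder
              task_name  => 'AUTOINV_UPD_TUNE', -- placeholder
              time_limit => 3600);
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'AUTOINV_UPD_TUNE');
END;
/
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('AUTOINV_UPD_TUNE') FROM dual;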
Regards

Similar Messages

  • Cleanup special characters Program taking too much time

    Hi Guys,
    We run a program called Z_CLEAN_SPECIAL_CHARS to clean up special characters. Normally it takes 5-10 minutes to execute, but for some reason it took 3 hours. We killed the program and restarted the process, and it took the normal time to execute.
    Our team wants to investigate why this happened in the first place, but we are not sure how and where to start. Any ideas? The log doesn't give much information.

    Hi,
    Open your program in SE38, set breakpoints, and execute it in debug mode.
    Check how many data records you are getting for cleanup and which step is taking more time.
    Maybe that day you had more records to search or clean up.
    It may also be a server bandwidth issue, as other higher-priority jobs may have been running at the same time.
    If there is an error, you will get a dump.
    Your program is a Z program, so it depends on what exactly the program is doing and which tables it is accessing.
    One question: how are you executing the program (via process chain or manually)?
    Regards,
    Jaya

  • iPhone touch not working on edges and taking too much time to respond; purchased in Kuwait, can I get a replacement in India?

    Hello guys, my iPhone 4S touch is not working on the edges, has some touch problems, and is taking too much time to respond. It was purchased in Kuwait; can I get a replacement in India?

    saikumar3 wrote:
    ...it is purchased in Kuwait, can I get a replacement in India? It is an iPhone 4S.
    No, the warranty is not international and is only valid in the country of purchase.

  • Report taking too much time in the portal

    Hi friends,
    We have developed a report on the ODS and published it on the portal.
    The problem is that when users execute the report at the same time, it takes too much time, and because of this the performance is very poor.
    Is there any way to sort out this issue? For example, can we send the report to the individual users' mail IDs so that they do not have to log in to the portal, or can we create the same report on the cube?
    What would be the main difference between a report built on the cube and one built on the ODS?
    Please help me.
    Thanks in advance,
    Sridath

    Hi
    Try this to improve the performance of the query.
    First, find the query run time. Where to find the query run time:
    Note 557870 - 'FAQ BW Query Performance'
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions if possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid too many characteristics in the rows.
    Use T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the response time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and the number of records transferred to the front end vs. records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the cube's performance metric, measure query runtime.
    3. To check the performance of the aggregates,see the columns valuation and usage in aggregates.
    Open the Aggregates...and observe VALUATION and USAGE columns.
    "---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
    if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
    In valuation column,if there are more positive sign it means that the aggregate performance is good and it is useful to have this aggregate.But if it has more negative sign it means we need not better use that aggregate.
    In usage column,we will come to know how far the aggregate has been used in query.
    Thus we can check the performance of the aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Your query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
    Implement the BW Statistics Business Content: you need to install it, feed data, and use the ready-made reports for analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-code DB20, which gives you all the performance-related information like:
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless through a process of checking the RSDDSTATAGGRDEF* tables:
    run the query in RSRT with statistics switched on, note the STATUID you get, and look it up in the table.
    This shows exactly which InfoObjects the query hits; if any one of those objects is missing from the aggregate, the aggregate is useless.
    6. Check table RSDDAGGRDIR in SE11. You can find the last call-up of each aggregate in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • Taking too much time to load application

    Hi,
    I have deployed a J2EE application on Oracle Application Server 10g version 10.1.2.0.2, but the application takes too much time to load. After loading, everything works fast.
    I have another 10g server (same version) on which the same application loads very fast.
    When I checked the Apache error logs I found this:
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:36:36 2007] [notice] FastCGI: process manager initialized (pid 21177)
    [Thu Apr 26 11:36:37 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:36:37 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:36:37 2007] [warn] long lost child came home! (pid 9124)
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0015: recv() returns 0. There has no message available to be received and oc4j has gracefully (orderly) closed the connection.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0184: Failed to find an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0119: Failed to get an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:46:33 2007] [notice] FastCGI: process manager initialized (pid 21726)
    [Thu Apr 26 11:46:34 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:46:34 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:46:34 2007] [warn] long lost child came home! (pid 21182)
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    Please HELP ME...

    Hi, this is the solution given by your link:
    A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
    Problem
    To improve performance, the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
    In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of stateful firewalls.
    However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
    Solution
    The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
    Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
    Oc4jUserKeepalive on
    Oc4jConnTimeout 12000 (or a similar value)
    Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
    ajp.keepalive=true
    For example:
    java -Dajp.keepalive=true -jar oc4j.jar
    Please tell me where, or in which file, I should put the option:
    java -Dajp.keepalive=true -jar oc4j.jar
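    For an OC4J instance that OPMN manages (the normal case in a full Application Server install), the JVM option goes into the java-options start parameter of that instance in $ORACLE_HOME/opmn/conf/opmn.xml rather than on a command line. A minimal sketch, assuming the instance is named "home" as in the log above (keep whatever options are already in the value attribute and append the new one):
    <process-type id="home" module-type="OC4J">
      <module-data>
        <category id="start-parameters">
          <!-- append -Dajp.keepalive=true to the existing options -->
          <data id="java-options" value="-server -Dajp.keepalive=true"/>
        </category>
      </module-data>
    </process-type>
    After saving, reload OPMN and restart the instance, e.g. opmnctl reload followed by opmnctl restartproc process-type=home.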

  • Taking too much time in Rules (DTP schedule run)

    Hi,
    I am scheduling a DTP which has filters to minimize the loaded data.
    When I run the DTP it takes too much time in the "rules" step (I can see the DTP monitor status package by package and step by step, like "Start Routine", "Rules" and "End Routine").
    It is consuming too much time in the rules mapping.
    What is the problem, and are there any solutions?
    Regards,
    Sree

    Hi,
    The time taken at "rules" depends on the complexity of your routine; a complex calculation will take time.
    Also check your DTP batch settings, i.e. how many background processes are used to perform the DTP, and the job class.
    You can find these: go to the DTP, open the Goto menu, and select "Settings for Batch Manager".
    In that screen, increase the number of processes from 3 to a higher number (max 9), and change the job class to 'A'.
    If your DTP is still running, cancel it (kill the DTP), delete the request from the cube, change these settings, and run your DTP one more time.
    You should observe the difference.
    Reddy

  • Taking too much time collecting in business content activation

    Hi all,
    I am collecting Business Content objects for activation. I selected the 0FIAA_CHA object, but collecting it for activation takes too much time; it then asks for source system authorization and then throws the error "maximum run time exceeded". I had selected the grouping "data flow before" there.
    What can be the reason for it?
    Please help.

    Hi,
    You should also always try to have the latest BI Content patch installed, but I don't think that is the problem here. It seems that there are a lot of objects to collect. Under 'Grouping' you can select the option 'Only necessary objects'; please check whether you can use this option to install just the objects you need from Content.
    Best Regards,
    Des.

  • Taking too much time using BufferedWriter to write to a file

    Hi,
    I'm using the method extractItems() given below to write data to a file. This method takes too much time to execute when the number of records in the enumeration is 10,000 or more; to be precise, around 70 minutes. The writing pauses intermittently for 20 seconds after writing a few lines, and sometimes for much longer. Has somebody faced this problem before, and if so, what could be the cause? This is very high-priority work, and it would be really helpful if someone could give me some info on this.
    Thanks in advance.
    public String extractItems() throws InternalServerException {
        try {
            String extractFileName = getExtractFileName();
            FileWriter fileWriter = new FileWriter(extractFileName);
            BufferedWriter bufferWrt = new BufferedWriter(fileWriter);
            CXBusinessClassIfc editClass = new ExploreClassImpl(className, mdlMgr);
            System.out.println("Before -1");
            CXPropertyInfoIfc[] propInfo = editClass.getClassPropertyInfo(configName);
            System.out.println("After -1");
            PrintWriter out = new PrintWriter(bufferWrt);
            System.out.println("Before -2");
            TemplateHeaderInfo.printHeaderInfo(propInfo, out, mdlMgr);
            System.out.println("After -2");
            XDItemSet itemSet = getItemsForObjectIds(catalogEditDO.getSelectedItems());
            Enumeration allitems = itemSet.allItems();
            System.out.println("the batch size : " + itemSet.getBatchSize());
            XDForm frm = itemSet.getXDForm();
            XDFormProperty[] props = frm.getXDFormProperties();
            System.out.println("Before -3");
            bufferWrt.newLine();
            long startTime, startTime1, startTime2, startTime3;
            startTime = System.currentTimeMillis();
            System.out.println("time here is--before-while : " + startTime);
            while (allitems.hasMoreElements()) {
                String aRow = "";
                XDItem item = (XDItem) allitems.nextElement();
                for (int i = 0; i < props.length; i++) {
                    // the forum formatting ate the array index; props[i] is meant here
                    String value = item.getStringValue(props[i]);
                    if (value == null || value.equalsIgnoreCase("null"))
                        value = "";
                    if (i == 0)
                        aRow = value;
                    else
                        aRow += ("\t" + value);
                }
                startTime1 = System.currentTimeMillis();
                System.out.println("time here is--before-writing to buffer --new: " + startTime1);
                bufferWrt.write(aRow.toCharArray());
                bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
                bufferWrt.newLine();
                startTime2 = System.currentTimeMillis();
                System.out.println("time here is--after-writing to buffer : " + startTime2);
            }
            startTime3 = System.currentTimeMillis();
            System.out.println("time here is--after-while : " + startTime3);
            out.close(); // added by rosmon to check extra time taken for extraction
            bufferWrt.close();
            fileWriter.close();
            System.out.println("After -3");
            return extractFileName;
        } catch (Exception e) {
            e.printStackTrace();
            throw new InternalServerException(e.getMessage());
        }
    }

    Hi fiontan,
    Thanks a lot for the response!
    Yeah, I know it's a lot of code, but I thought it would be more informative if the whole function was quoted.
    I am in fact using a PrintWriter to wrap the BufferedWriter, but am not using the print() method.
    Does it save any time to use the print() method?
    The place where the delay is occurring is the while loop shown below:
    while (allitems.hasMoreElements()) {
        String aRow = "";
        XDItem item = (XDItem) allitems.nextElement();
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]); // props[i], as above
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i == 0)
                aRow = value;
            else
                aRow += ("\t" + value);
        }
        startTime1 = System.currentTimeMillis();
        System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
        bufferWrt.write(aRow.toCharArray());
        out.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.newLine();
        startTime2 = System.currentTimeMillis();
        System.out.println("time here is--after-writing to buffer : " + startTime2);
    }
    What exactly happens is that after a few loops it just seems to sleep for around 20 seconds, then starts off again, and it goes on like this until the records are done.
    Please do let me know if you have any idea why this is happening! This bug is giving me a scare.
    Thanks in advance.
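    For comparison, a minimal sketch of the same loop with the per-row flush() calls removed and the row built in a StringBuilder; the XDItem/XDFormProperty usage is taken from the snippet above, everything else is as in the original method:
    StringBuilder row = new StringBuilder(256);
    while (allitems.hasMoreElements()) {
        XDItem item = (XDItem) allitems.nextElement();
        row.setLength(0);                       // reuse the buffer for each row
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]);
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i > 0)
                row.append('\t');
            row.append(value);
        }
        bufferWrt.write(row.toString());
        bufferWrt.newLine();                    // no flush() per row
    }
    bufferWrt.close();                          // close() flushes once at the end
    A BufferedWriter only pays the cost of the underlying file write when its buffer fills; flushing after every row defeats that buffering, and String += copies the whole row once per appended column.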

  • Taking too much time (1min) to connect to database

    Hi,
    I have Oracle 10.2 and 10g Application Server.
    It is taking too much time (about a minute) to connect to the database through the application (in the browser). The connection through SQL*Plus is fine.
    Please share your experience.
    Regards,
    Naseer

    Dear AnaTech,
    I am going to ask something not related to the question you already answered: how do I connect Forms 6i and Developer 10g with OracleAS?
    I have Developer Suite 10g Ver. 10.1.2 and Form Builder 6i installed and working. On another machine I have Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 installed and working, and on the database machine I also installed Oracle Enterprise Manager 10g Application Server Control 10.1.2.0.2.
    My database connectivity with Developer Suite Forms and Reports, and also with Forms 6i and Reports 6i, is working fine; no problem there.
    My first question: when I try to run a Form 6i form through Run from Web, I get this error: FRM-99999: error 18121 occurred, see the release notes.
    My main question is how I can control my OracleAS 10g with Forms, because OracleAS is basically a mid-tier product, but I am not utilizing the mid-tier; I am using a two-tier environment even though I installed a three-tier environment. So please tell me how to utilize it with three tiers.
    I hope you don't mind that I ask this question here. If you give me your email, we can discuss this in detail, and your expertise would be very helpful.
    Waiting for your great response.
    Regards,
    K.J.J.C

  • BPC application is taking too much time to load

    Hi experts!
    I'm facing a very weird problem...
    We've developed a BPC application (app name: USM).
    This application takes too much time to load on some computers (around 8 minutes). Yes, on SOME computers.
    There are around 100,000 records in the database, most coming from material master data.
    If I load this USM application on another computer, it loads smoothly. The computers' hardware is all the same, the server is generously sized, and everyone is on the same network.
    I talked to the infrastructure department and we made several tests. We ran BPC on the server (loaded quickly) and on several computers (some load quickly, others don't), used wireless and cable connections (same result either way), and checked the communication between BW and BPC, which is OK.
    After all that, I tried to load the APSHELL application in the same environment and it loaded instantly. So I guess something is wrong with my application. But if that were it, I suppose it would happen on all computers, not only some of them.
    Has anybody ever seen something like this?
    Thank you in advance.
    Rubens
    Edited by: Rubens Massayuki Kumori on May 12, 2011 8:43 PM
    Edited by: Rubens Massayuki Kumori on May 12, 2011 8:46 PM

    Hi Rubens,
    I would try a couple of tests:
    1. Install the client on a machine located in the same network segment, or use a VPN that communicates with the server bypassing all security devices, just to see whether the network is the problem.
    2. Run a full optimize of the application to see if the problem is related to the segmentation of the cubes (I don't think this is the problem, but give it a try).
    It is very weird that it happens on some computers and not on others... also try cleaning up the local application cache on the computers that are giving you bad performance, and retry.
    Hope it helps,

  • Full DTP taking too much time to load

    Hi All,
    I am facing an issue where a DTP takes too much time to load data from a DSO to a cube, both via process chain and when run manually.
    There are 6 similar DTPs which load data for different countries (different DSOs and cubes as source and target respectively) for the last 7 days based on GI date. All the DTPs pull almost the same number of records and finish within 25-30 minutes, but one DTP takes around 3 hours. The problem started a couple of days back.
    I have changed the parallel processes from 3 to 4 to 5, and the packet size from 50,000 to 10,000 to 100,000, but no improvement. I also want to mention that all the source DSOs and target cubes have the same structure, and all the transformations have field routines and end routines.
    Can you all please share some pointers which can help?
    Thanks
    Prateek

    Hi Raman,
    This is what I get when I check the report. Could this be causing issues, given that 2 rows have a ratio >= 100%?
    ETVC0006           /BIC/DETVC00069     rows:      1.484    ratio:          0  %
    ETVC0006           /BIC/DETVC0006C     rows: 15.059.600    ratio:        103  %
    ETVC0006           /BIC/DETVC0006D     rows:        242    ratio:          0  %
    ETVC0006           /BIC/DETVC0006P     rows:         66    ratio:          0  %
    ETVC0006           /BIC/DETVC0006T     rows:        156    ratio:          0  %
    ETVC0006           /BIC/DETVC0006U     rows:          2    ratio:          0  %
    ETVC0006           /BIC/EETVC0006      rows: 14.680.700    ratio:        100  %
    ETVC0006           /BIC/FETVC0006      rows:          0    ratio:          0  %
    ETVC0007           rows: 13.939.200    density:              0,0  %

  • HT201250 My Time Capsule is taking too much time indexing the backup and then even longer to back up (207 days or more). What shall I do?

    My Time Capsule is taking too much time indexing the backup, and the backup itself then shows 207 days or longer remaining. What shall I do?

    Try 10.7.5 supplemental update.
    This update seems to have solved this problem for many.
    Best.

  • Code taking too much time to output

    The following code is taking too much time to execute (sometimes giving TIME_OUT):
    ind = sy-tabix.
    SELECT SINGLE * FROM mseg INTO mseg
       WHERE bwart = '102' AND
             lfbnr = itab-mblnr AND
             ebeln = itab-ebeln AND
             ebelp = itab-ebelp.
    IF sy-subrc = 0.
      DELETE itab INDEX ind.
      CONTINUE.
    ENDIF.
    Is there any other way to write this code to reduce the run time?
    Thanks

    Hi,
    I think you are executing this code in a loop, which is causing the problem. The rule is: "Never put SELECT statements inside a loop."
    Try to rewrite the code as follows:
    * Outside the loop (FOR ALL ENTRIES needs itab to be non-empty)
    IF itab[] IS NOT INITIAL.
      SELECT * FROM mseg
        INTO TABLE lt_mseg
        FOR ALL ENTRIES IN itab
        WHERE bwart = '102' AND
              lfbnr = itab-mblnr AND
              ebeln = itab-ebeln AND
              ebelp = itab-ebelp.
    ENDIF.
    Then inside the loop, do a READ on the internal table and delete the row when a match exists (the same condition under which your original code deleted):
    LOOP AT itab.
      READ TABLE lt_mseg TRANSPORTING NO FIELDS
           WITH KEY lfbnr = itab-mblnr
                    ebeln = itab-ebeln
                    ebelp = itab-ebelp.
      IF sy-subrc = 0.
        DELETE itab. "index is automatically determined here from SY-TABIX
      ENDIF.
    ENDLOOP.
    I think this should optimise performance. You can check your code's performance using SE30 or ST05.
    Hope this helps! Please revert if you need anything else!
    Cheers,
    Shailesh.
    Always provide feedback for helpful answers!
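    One further refinement, assuming lt_mseg can grow large: sort it once after the SELECT and let each READ use a binary search. A minimal sketch along the lines of the code above:
    SORT lt_mseg BY lfbnr ebeln ebelp.
    LOOP AT itab.
      READ TABLE lt_mseg TRANSPORTING NO FIELDS
           WITH KEY lfbnr = itab-mblnr
                    ebeln = itab-ebeln
                    ebelp = itab-ebelp
           BINARY SEARCH.
      IF sy-subrc = 0.
        DELETE itab.
      ENDIF.
    ENDLOOP.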

  • Procedure Taking too Much Time

    Hi Guys,
    I have a package which contains a number of functions and procedures and is linked to Oracle's standard report called Production Shortage Report. It is taking too much time; these days it does not even finish after three days.
    So I am looking for some tuning steps I can follow for all the queries one by one. I tried to use Explain Plan, but it does not show much difference after moving a column from one location to another, etc. I have also updated the existing indexes and created new ones.
    Thanks
    Shishu Paul

    Shishu,
    Is this report part of an Oracle product such as the E-Business Suite (EBS)? If so, I recommend you create a Service Request (SR) with Oracle and have them tune the report. Doing your own tuning could cause you support problems later on.
    Just my two cents...
    Craig

  • ODS to CUBE load taking too much time..

    Hi all,
    We are loading data from our ZODS to ZCUBE, but the data load is taking too much time. We haven't created any indexes. We also tried making an InfoSource for the ODS, but still the same problem. It always shows 0 of 345,674 records, i.e. the records are not getting extracted from the ODS.
    Can anybody help me in this regard? It is a bit urgent.
    Thanks in advance.

    Hi,
    There are a few things you can check. First, check with ST22 whether this job hasn't ended in a dump.
    The next thing you can do, if the job doesn't end abnormally, is to reduce the number of records processed at the same time. Sometimes the system has trouble if the number of records it has to process is too large. Go to the InfoPackage -> DataS. Default Data Transfer -> set the maximum to 10% of the default value, then try to run the load again.
    If the job still doesn't finish, check whether there are any ABAP routines and/or formulas involved in the update rules; maybe they are running in a loop.
    Regards,
    Raymond Baggen
    Uphantis bv
