SMQ1 deletion problem: too much time for larger queues

Hi Experts
We have to delete queues from SMQ1. The problem is that the data volume in the queues is huge, and the system is taking too much time to delete them.
It took 4 hours to delete one queue of 122,000 records from SMQ1.
The problem is that we have to delete 5,148,000 records in total, and one queue alone holds 2,610,000 of them.
Is there any solution to speed up the deletion, or any workaround to delete the queues from SMQ1 faster?
We are deleting the queues in a live system, I mean the production system is up and running.
Thanks
Navneet

Hi Navneet,
Use the report RSTRFCQDS to delete a particular queue from SMQ1.
The report checks for inconsistencies between the tables
TRFCQOUT, ARFCSSTATE, QREFTID and ARFCSDATA, so it will take time.
The fastest way to delete the queues is to use an SQL tool to delete all the entries in TRFCQOUT, ARFCSSTATE, QREFTID and ARFCSDATA directly. In this case, you also delete the entries in SM58 (tRFC).
See also SAP Note 760113.
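A minimal sketch of that direct-SQL approach over JDBC, assuming an Oracle-based SAP database (the URL, credentials and schema are placeholders, and the exact cleanup steps should be verified against SAP Note 760113). This bypasses all application-level checks, so take a backup first:

import java.sql.*;

// Clears the qRFC/tRFC tables named above in one pass. Connection
// details are placeholders; verify the table list against Note 760113.
public class ClearTrfcQueues {
    static final String[] TABLES = {
        "TRFCQOUT", "ARFCSSTATE", "QREFTID", "ARFCSDATA"
    };

    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@//sapdb:1521/PRD"; // placeholder
        try (Connection con = DriverManager.getConnection(url, "sapr3", "secret")) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                for (String table : TABLES) {
                    // DELETE stays transactional; TRUNCATE would be faster
                    // but cannot be rolled back.
                    int rows = st.executeUpdate("DELETE FROM " + table);
                    con.commit(); // commit per table to keep undo small
                    System.out.println(table + ": " + rows + " rows deleted");
                }
            }
        }
    }
}

Working at the database level is fast precisely because none of the per-LUW checks SMQ1 performs are done; the trade-off is that every queue disappears, not just the one you meant, along with the SM58 entries.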
Rgds,
Colum

Similar Messages

  • FP-2000 FP Read.vi problem: too much time to call

    Hi,
    Since I installed LabVIEW 8.0 and all the latest software for Real-Time targets, I have a problem with my FieldPoint FP-2000 when I try to read an I/O point on the FieldPoint:
    In the project, when I launch my VI on my computer, FP Read.vi or FP Write.vi executes instantly, so no problem.
    But if I launch exactly the same VI on the RT target, it deploys correctly but takes about 5-10 seconds to execute EACH call to the FP VIs. That means that if I want to read, for example, 10 inputs on the FieldPoint, it takes more than 1 minute! The VI hangs until the FP Read executes, but I don't get any error message.
    I discovered that it is the DLL call that takes a long time to execute. If I launch a VI without a DLL call, it executes instantly.
    I verified that the software on my FieldPoint target is exactly the same as on my host PC...
    I hope somebody can help me, I'm very annoyed!

    Hi Thomas,
    Yes, it's only the first call that takes time. And yes, I can call all my I/O after boot, but that takes a lot of time (I have 34 I/O points to call, at 5-10 seconds each).
    But for the moment it's the only workaround...

  • Archive Delete job taking too much time - STXH Sequential Read

    Hello,
    We have been running archive sessions in our production system for the last couple of months. We use SARA, selecting the appropriate variants for the WRITE, DELETE and STORAGE options.
    Currently we use the archive object FI_DOCUMNT, and the write job finishes in the normal time (5 hrs, based on the selection criteria). After that, the delete job has always completed within 1 to 2 hours (over the last 3 months).
    But in the last few days the delete job has been taking too much time to complete (around 8-10 hrs). When I monitored the system I found that a sequential read on table STXH is taking too much time, and this seems to be the cause.
    Could you please provide a solution for the same, so that the job runs as fast as it did earlier.
    Thanks for your time
    Shyl

    Hi Juan,
    After the statistics run, the performance is quite good. Now the job finishes as expected.
    Thanks. Problem solved.
    Shyl
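    For reference, a minimal sketch of what such a statistics run boils down to on an Oracle-based system, assuming direct JDBC access (the URL, credentials and the SAPR3 schema are placeholders; on a live SAP system this is normally triggered via BRCONNECT or transaction DB20 rather than raw JDBC):

    import java.sql.*;

    // Refreshes optimizer statistics for STXH so the optimizer can pick a
    // better access path than a sequential read. Details are assumptions.
    public class GatherStxhStats {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@//sapdb:1521/PRD"; // placeholder
            try (Connection con = DriverManager.getConnection(url, "system", "secret");
                 CallableStatement cs = con.prepareCall(
                         "BEGIN DBMS_STATS.GATHER_TABLE_STATS("
                         + " ownname => ?, tabname => ?, cascade => TRUE); END;")) {
                cs.setString(1, "SAPR3"); // placeholder schema
                cs.setString(2, "STXH");
                cs.execute(); // cascade => TRUE also gathers index statistics
            }
        }
    }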

  • Delete query taking too much time

    Hi All,
    My delete query is taking too much time: around 1 hr 30 min for 1.5 lakh (150,000) records.
    This is even though I have dropped the MV log on the table and disabled all the triggers on it.
    Moreover, the deletion is based on the primary key:
    delete from table_name where primary_key in (values)
    The above is the dummy format of my query.
    Can anyone please tell me what other reason there could be for the query performing that slowly?
    Is there anything to check in the DB other than triggers, MV logs and constraints in order to improve the performance?
    Please reply ASAP.

    Delete is the most time-consuming operation, as the whole record has to be stored in the undo segments. On the other hand, the part of the query used to select the records to delete is probably adding extra overhead to the process: the IN (values) clause. It would be nice if you could post another dummy of this (values) clause. I would guess it is a subquery, and it may be that in order to obtain this list you are running an inefficient query.
    You can gather the execution plan so you can see where the heaviest part of the query is. That way a better tuning approach and a more accurate diagnosis can be made.
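    A minimal JDBC sketch of gathering that execution plan on Oracle; the table, key column and literal values are placeholders taken from the dummy query above:

    import java.sql.*;

    // EXPLAIN PLAN stores the plan without executing the DELETE;
    // DBMS_XPLAN.DISPLAY then formats it for reading.
    public class ShowDeletePlan {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder
            try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
                 Statement st = con.createStatement()) {
                st.execute("EXPLAIN PLAN FOR "
                        + "DELETE FROM table_name WHERE primary_key IN (1, 2, 3)");
                try (ResultSet rs = st.executeQuery(
                        "SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY)")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1)); // one plan line per row
                    }
                }
            }
        }
    }

    If the plan shows the IN list being resolved with a full scan or a nested loop per value, that part, rather than the delete itself, is usually the thing to tune.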
    ~ Madrid.

  • Retrieving large records over the network takes too much time. How can I improve it?

    Hi All,
    I have an Oracle 10g server, and I am trying to fetch around 200 thousand (2 lakh) records. I use a servlet deployed on JBoss 4.0, and the records come over the network.
    I used the simple rs.next() method, but it took too much time: I got only 30 records within 1 sec, so fetching all 2 lakh records takes more than 40 min, and my requirement is that they be retrieved within 40 min.
    Is there another way around this problem? Is there any way for the ResultSet to get 1000 records in one call?
    As I read somewhere: "If we use a normal ResultSet, data isn't retrieved until you do the next call. The ResultSet isn't a memory table, it's a cursor into the database which loads the next row on request (though the drivers are at liberty to anticipate the request)."
    So if we could request around 1000 records in one call, maybe we could reduce the time.
    Does anyone have an idea how to improve this?
    Regards,
    Shailendra Soni

    That's true...
    I have solved my problem by invoking setFetchSize on the ResultSet object,
    e.g. ResultSet.setFetchSize(1000).
    But that only sorted the problem out for fewer than 1 lakh records. I still want to test with more than 1 lakh records.
    Actually, I read a nice article on the net:
    [http://www.precisejava.com/javaperf/j2ee/JDBC.htm#JDBC114]
    They describe solutions for this type of problem, but they don't give any examples, and without examples I can't work out how to resolve it.
    They give two solutions, i.e.:
    Fetch small amounts of data iteratively instead of fetching the whole data at once.
    Applications generally need to retrieve huge amounts of data from the database using JDBC in operations like searching. If the client requests a search, the application might return the whole result set at once. This takes a lot of time and has an impact on performance. The solutions to the problem are:
    1. Cache the search data on the server side and return it to the client iteratively. For example, if the search returns 1000 records, return the data to the client in 10 iterations of 100 records each.
    // But I don't understand how I can do this in Java.
    2. Use stored procedures to return the data iteratively. This does not use server-side caching; instead, the server-side application uses stored procedures to return small amounts of data iteratively.
    // But I don't understand how I can do this in Java.
    If you know either of these solutions, can you please give me examples of how to do it?
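    A minimal sketch of the first approach, under stated assumptions (an Oracle JDBC driver on the classpath and a hypothetical table BIG_TABLE with a numeric key ID). It combines the setFetchSize trick above with keyset pagination, so each loop iteration ships one bounded chunk to the caller:

    import java.sql.*;

    // Streams BIG_TABLE in chunks of 1000 rows, resuming after the last key
    // seen, so no single call has to materialize the whole result set.
    public class ChunkedFetch {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder
            String sql = "SELECT * FROM ("
                    + " SELECT id, payload FROM big_table"
                    + " WHERE id > ? ORDER BY id"
                    + ") WHERE ROWNUM <= 1000"; // 10g-compatible pagination
            try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setFetchSize(1000); // 1000 rows per network round trip
                long lastId = 0;
                int rows;
                do {
                    ps.setLong(1, lastId);
                    rows = 0;
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            lastId = rs.getLong("id");
                            rows++;
                            // hand rs.getString("payload") to the caller here
                        }
                    }
                } while (rows == 1000); // a short chunk means we are done
            }
        }
    }

    The stored-procedure variant (solution 2) would look the same from the Java side, except that each chunk would come back from a CallableStatement returning a REF CURSOR instead of a plain SELECT.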
    Thanks in Advance,
    Shailendra

  • Hello guys, my iPhone's touch is not working on the edges, it has some touch problems and it takes too much time to respond; it was purchased in Kuwait, can I get it replaced in India?

    Hello guys, my iPhone's touch is not working on the edges, it has some touch problems, and it is taking too much time to respond. It was purchased in Kuwait; can I get it replaced in India? It is an iPhone 4S.

    saikumar3 wrote:
    ...it was purchased in Kuwait, can I get it replaced in India? It is an iPhone 4S.
    No, the warranty is not international and is only valid in the country of purchase.

  • Auto Invoice program taking too much time: problem with an update SQL

    Hi,
    Oracle DB version: 11.2.0.3
    Oracle EBS version: 12.1.3
    Though we have a SEV-1 SR open with Oracle, we have not had much success.
    We have an Auto Invoice program which runs many times a day, and it has been taking too much time since the beginning. While troubleshooting we found one query to be taking most of the time, and we seek suggestions on how to tune it. I am attaching the explain plan for the same. It is an update query. Please guide.
    Plan
    UPDATE STATEMENT  ALL_ROWSCost: 0  Bytes: 124  Cardinality: 1 
      50 UPDATE AR.RA_CUST_TRX_LINE_GL_DIST_ALL
      27 FILTER 
      26 HASH JOIN  Cost: 8,937,633  Bytes: 4,261,258,760  Cardinality: 34,364,990 
      24 VIEW VIEW SYS.VW_NSO_1 Cost: 8,618,413  Bytes: 446,744,870  Cardinality: 34,364,990 
      23 SORT UNIQUE  Cost: 8,618,413  Bytes: 4,042,339,978  Cardinality: 34,364,990 
      22 UNION-ALL 
      9 FILTER 
      8 SORT GROUP BY  Cost: 5,643,052  Bytes: 3,164,892,625  Cardinality: 25,319,141 
      7 HASH JOIN  Cost: 1,640,602  Bytes: 32,460,436,875  Cardinality: 259,683,495 
      1 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993  Bytes: 402,499,500  Cardinality: 20,124,975 
      6 HASH JOIN  Cost: 853,567  Bytes: 22,544,143,440  Cardinality: 214,706,128 
      4 HASH JOIN  Cost: 536,708  Bytes: 2,357,000,550  Cardinality: 29,835,450 
      2 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 153,008  Bytes: 1,163,582,550  Cardinality: 29,835,450 
      3 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 307,314  Bytes: 1,193,526,000  Cardinality: 29,838,150 
      5 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 132,951  Bytes: 3,123,197,116  Cardinality: 120,122,966 
      21 FILTER 
      20 SORT GROUP BY  Cost: 2,975,360  Bytes: 877,447,353  Cardinality: 9,045,849 
      19 HASH JOIN  Cost: 998,323  Bytes: 17,548,946,769  Cardinality: 180,916,977 
      13 VIEW VIEW AR.index$_join$_027 Cost: 108,438  Bytes: 867,771,256  Cardinality: 78,888,296 
      12 HASH JOIN 
      10 INDEX RANGE SCAN INDEX AR.RA_CUSTOMER_TRX_N15 Cost: 58,206  Bytes: 867,771,256  Cardinality: 78,888,296 
      11 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 62,322  Bytes: 867,771,256  Cardinality: 78,888,296 
      18 HASH JOIN  Cost: 748,497  Bytes: 3,281,713,302  Cardinality: 38,159,457 
      14 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993  Bytes: 402,499,500  Cardinality: 20,124,975 
      17 HASH JOIN  Cost: 519,713  Bytes: 1,969,317,900  Cardinality: 29,838,150 
      15 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 302,822  Bytes: 716,115,600  Cardinality: 29,838,150 
      16 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 149,847  Bytes: 1,253,202,300  Cardinality: 29,838,150 
      25 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 157,552  Bytes: 5,158,998,615  Cardinality: 46,477,465 
      41 SORT GROUP BY  Bytes: 75  Cardinality: 1 
      40 FILTER 
      39 MERGE JOIN CARTESIAN  Cost: 11  Bytes: 75  Cardinality: 1 
      35 NESTED LOOPS  Cost: 8  Bytes: 50  Cardinality: 1 
      32 NESTED LOOPS  Cost: 5  Bytes: 30  Cardinality: 1 
      29 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 3  Bytes: 22  Cardinality: 1 
      28 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_LINES_U1 Cost: 2  Cardinality: 1 
      31 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 2  Bytes: 133,114,520  Cardinality: 16,639,315 
      30 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1  Cardinality: 1 
      34 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3  Bytes: 20  Cardinality: 1 
      33 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 2  Cardinality: 1 
      38 BUFFER SORT  Cost: 9  Bytes: 25  Cardinality: 1 
      37 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3  Bytes: 25  Cardinality: 1 
      36 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2  Cardinality: 1 
      49 SORT GROUP BY  Bytes: 48  Cardinality: 1 
      48 FILTER 
      47 NESTED LOOPS 
      45 NESTED LOOPS  Cost: 7  Bytes: 48  Cardinality: 1 
      43 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 4  Bytes: 20  Cardinality: 1 
      42 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 3  Cardinality: 1 
      44 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2  Cardinality: 1 
      46 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3  Bytes: 28  Cardinality: 1 
    As per Oracle's advice we applied multiple patches, but that has not been helpful. Please suggest how I can tune this query; I don't have much of a clue about query tuning.
    Regards

    Hi Paul, my bad. I am sorry I missed it.
    The query is as below:
    UPDATE RA_CUST_TRX_LINE_GL_DIST LGD SET (AMOUNT, ACCTD_AMOUNT) = (SELECT /*+ index(rec1 RA_CUST_TRX_LINE_GL_DIST_N6) ordered */ NVL(LGD.AMOUNT, 0) - ( SUM(LGD2.AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * CTL.EXTENDED_AMOUNT ) ), NVL(LGD.ACCTD_AMOUNT, 0) - ( SUM(LGD2.ACCTD_AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX CT, RA_CUST_TRX_LINE_GL_DIST LGD2, RA_CUST_TRX_LINE_GL_DIST REC1 WHERE CTL.CUSTOMER_TRX_LINE_ID = LGD2.CUSTOMER_TRX_LINE_ID AND CTL.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND LGD.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD2.ACCOUNT_SET_FLAG = 'N' AND REC1.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID AND REC1.ACCOUNT_CLASS = 'REC' AND REC1.LATEST_REC_FLAG = 'Y' AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD2.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, REC1.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, CT.EXCHANGE_RATE ), PERCENT = (SELECT /*+ index(rec2 RA_CUST_TRX_LINE_GL_DIST_N6) */ DECODE(LGD.ACCOUNT_CLASS || LGD.ACCOUNT_SET_FLAG, 'SUSPENSEN', LGD.PERCENT, 'UNBILLN', LGD.PERCENT, 'UNEARNN', LGD.PERCENT, NVL(LGD.PERCENT, 0) - ( SUM(NVL(LGD4.PERCENT, 0)) - DECODE(REC2.GL_DATE, NVL(LGD.GL_DATE, REC2.GL_DATE), 100, 0) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD4, RA_CUST_TRX_LINE_GL_DIST REC2 WHERE LGD.CUSTOMER_TRX_LINE_ID = LGD4.CUSTOMER_TRX_LINE_ID AND REC2.CUSTOMER_TRX_ID = LGD.CUSTOMER_TRX_ID AND REC2.CUSTOMER_TRX_ID = LGD4.CUSTOMER_TRX_ID AND REC2.ACCOUNT_CLASS = 'REC' AND REC2.LATEST_REC_FLAG = 'Y' AND LGD4.ACCOUNT_SET_FLAG = LGD.ACCOUNT_SET_FLAG AND DECODE(LGD4.ACCOUNT_SET_FLAG, 'Y', LGD4.ACCOUNT_CLASS, LGD.ACCOUNT_CLASS) = LGD.ACCOUNT_CLASS AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD4.GL_DATE, TO_DATE( 2415021, 'J') ) GROUP BY REC2.GL_DATE, LGD.GL_DATE ), LAST_UPDATED_BY = :B1 , LAST_UPDATE_DATE = SYSDATE WHERE CUST_TRX_LINE_GL_DIST_ID IN (SELECT /*+ index(rec3 RA_CUST_TRX_LINE_GL_DIST_N6) */ MIN(DECODE(LGD3.GL_POSTED_DATE, NULL, LGD3.CUST_TRX_LINE_GL_DIST_ID, NULL) ) FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX T, RA_CUST_TRX_LINE_GL_DIST LGD3, RA_CUST_TRX_LINE_GL_DIST REC3 WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = CTL.CUSTOMER_TRX_ID AND (CTL.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE' ) OR (CTL.LINE_TYPE = 'LINE' AND CTL.ACCOUNTING_RULE_ID IS NULL )) AND LGD3.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID AND LGD3.ACCOUNT_SET_FLAG = 'N' AND REC3.CUSTOMER_TRX_ID = T.CUSTOMER_TRX_ID AND REC3.ACCOUNT_CLASS = 'REC' AND REC3.LATEST_REC_FLAG = 'Y' AND NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) = DECODE(:B4 , 'INV', -1, 'REGULAR_CM', T.PREVIOUS_CUSTOMER_TRX_ID, NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) ) GROUP BY CTL.CUSTOMER_TRX_LINE_ID, LGD3.GL_DATE, REC3.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, T.EXCHANGE_RATE HAVING ( SUM(NVL(LGD3.AMOUNT, 0)) <> CTL.EXTENDED_AMOUNT * DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) OR SUM(NVL(LGD3.ACCTD_AMOUNT, 0)) <> DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) * DECODE(:B2 , NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) UNION SELECT /*+ index(rec5 RA_CUST_TRX_LINE_GL_DIST_N6) INDEX (lgd5 ra_cust_trx_line_gl_dist_n6) index(ctl2 ra_customer_trx_lines_u1) */ TO_NUMBER( MIN(DECODE(LGD5.GL_POSTED_DATE||LGD5.ACCOUNT_CLASS|| LGD5.ACCOUNT_SET_FLAG, 'REVN', 
LGD5.CUST_TRX_LINE_GL_DIST_ID, 'REVY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'TAXY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'FREIGHTY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESN', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'CHARGESY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNEARNY', LGD5.CUST_TRX_LINE_GL_DIST_ID, 'UNBILLY', LGD5.CUST_TRX_LINE_GL_DIST_ID, NULL ) ) ) FROM RA_CUST_TRX_LINE_GL_DIST LGD5, RA_CUST_TRX_LINE_GL_DIST REC5, RA_CUSTOMER_TRX_LINES CTL2, RA_CUSTOMER_TRX T WHERE T.REQUEST_ID = :B5 AND T.CUSTOMER_TRX_ID = REC5.CUSTOMER_TRX_ID AND CTL2.CUSTOMER_TRX_LINE_ID = LGD5.CUSTOMER_TRX_LINE_ID AND REC5.CUSTOMER_TRX_ID = LGD5.CUSTOMER_TRX_ID AND REC5.ACCOUNT_CLASS = 'REC' AND REC5.LATEST_REC_FLAG = 'Y' AND (CTL2.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE') OR (CTL2.LINE_TYPE = 'LINE' AND (CTL2.ACCOUNTING_RULE_ID IS NULL OR LGD5.ACCOUNT_SET_FLAG = 'Y' ))) GROUP BY LGD5.CUSTOMER_TRX_LINE_ID, LGD5.GL_DATE, REC5.GL_DATE, LGD5.ACCOUNT_SET_FLAG, DECODE(LGD5.ACCOUNT_SET_FLAG, 'N', NULL, LGD5.ACCOUNT_CLASS) HAVING SUM(NVL(LGD5.PERCENT, 0)) <> DECODE( NVL(LGD5.GL_DATE, REC5.GL_DATE), REC5.GL_DATE, 100, 0) )
    I understand that this could be a seeded query, but my attempt is to tune it.
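    Since the statement is seeded, one low-risk starting point is to let the SQL Tuning Advisor analyze it by SQL_ID. A hedged JDBC sketch, assuming the Tuning Pack is licensed and the ADVISOR privilege is granted; the URL, credentials and SQL_ID are placeholders:

    import java.sql.*;

    // Creates, runs and reports a SQL Tuning Advisor task for one SQL_ID.
    public class TuneAutoInvoiceSql {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@//ebsdb:1521/PROD"; // placeholder
            try (Connection con = DriverManager.getConnection(url, "system", "secret")) {
                String task;
                try (CallableStatement cs = con.prepareCall(
                        "BEGIN ? := DBMS_SQLTUNE.CREATE_TUNING_TASK("
                        + " sql_id => ?, time_limit => 3600); END;")) {
                    cs.registerOutParameter(1, Types.VARCHAR);
                    cs.setString(2, "abcd1234efgh5"); // placeholder SQL_ID
                    cs.execute();
                    task = cs.getString(1);
                }
                try (CallableStatement cs = con.prepareCall(
                        "BEGIN DBMS_SQLTUNE.EXECUTE_TUNING_TASK(?); END;")) {
                    cs.setString(1, task);
                    cs.execute();
                }
                // The report often suggests a SQL profile or missing statistics.
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK(?) FROM dual")) {
                    ps.setString(1, task);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (rs.next()) System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }

    For a seeded EBS statement, accepting a SQL profile recommended by the advisor is usually safer than rewriting the SQL, since the text is owned by Oracle.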
    Regards

  • Report taking too much time in the portal

    Hi friends,
    We have developed a report on an ODS and published it on the portal.
    The problem is that when the users execute the report at the same time, it takes too much time, and because of this the performance is very poor.
    Is there any way to sort out this issue? For example, can we send the report to the individual users' mail IDs so that they do not have to log in to the portal? Or can we create the same report on a cube?
    What would be the main difference if the report were built on the cube rather than the ODS?
    Please help me.
    Thanks in advance,
    sridath

    Hi
    Try this to improve the performance of the query.
    First, find the query run-time. Where to find it:
    Note 557870 - FAQ BW Query Performance
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid too many characteristics in rows.
    Use T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under the system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run reporting agent at night and sending results to email. This will ensure use of OLAP cache. So later report execution will retrieve the result faster from the OLAP cache.
    Also try
    1. Use the different views in ST03 to see two important parameters: the aggregation ratio, and the records transferred to the front end versus the records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the cube's performance metric, measure the query runtime.
    3. To check the performance of the aggregates, see the VALUATION and USAGE columns.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The string of signs is the valuation of the aggregate design and usage. Plus signs mean the compression is good and the aggregate is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. Minus signs mean the compression ratio and access are not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation of the aggregate.
    If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    In the VALUATION column, more plus signs mean the aggregate performs well and is worth keeping, while more minus signs mean we had better not use that aggregate.
    In the USAGE column, we can see how often the aggregate has been used by queries.
    Thus we can check the performance of the aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Query performance can also depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work.
    Implement the BW Statistics Business Content: you need to install it, feed it data, and then analyse through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the report RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using the aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless through a process of checking the RSDDSTATAGGRDEF* tables:
    Run the query in RSRT with statistics execution, then come back; you will get a STATUID. Copy this and check it in the table.
    This shows you exactly which InfoObjects the query hits; if any one of the objects is missing, the aggregate is useless.
    6. Check table RSDDAGGRDIR in SE11. You can find the last call-up of the aggregate in that table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • SELECT query takes too much time! Why?

    Please find my SELECT query below:
    select w~mandt
           w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
           w~kwmeng w~vrkme w~matwa w~charg w~pstyv
           w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
           w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
           w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
           w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
           w~bedae w~cuobj w~mtvfp
           x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
           x~edatu
           x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
           x~ezeit
      into table t_vbap
      from vbap as w
      inner join vbep as x on x~vbeln = w~vbeln and
                              x~posnr = w~posnr and
                              x~mandt = w~mandt
      where
        ( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
        ( ( w~erdat > pre_dat and w~erdat < p_syndt ) or
          ( w~erdat = p_syndt and w~erzet <= p_syntm ) ) and
        w~matnr in s_matnr and
        w~pstyv in s_itmcat and
        w~lfrel in s_lfrel and
        w~abgru = ' ' and
        w~kwmeng > 0 and
        w~mtvfp in w_mtvfp and
        x~ettyp in w_ettyp and
        x~bdart in s_req_tp and
        x~plart in s_pln_tp and
        x~etart in s_etart and
        x~abart in s_abart and
        ( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
    The problem: this statement takes too much time to execute.
    Could anybody change this statement and help me reduce the DB access time?
    Thx

    Ways of Performance Tuning
    1. Selection Criteria
    2. Select Statements
    •     Select Queries
    •     SQL Interface
    •     Aggregate Functions
    •     For All Entries
    •     Select Over More than One Internal Table
    Selection Criteria
    1. Restrict the data in the selection criteria itself, rather than filtering it out in the ABAP code using the CHECK statement.
    2. Select with a selection list.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be optimized much further with the code written below, which avoids CHECK and selects with a selection list.
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    Select Statements: Select Queries
    1.     Avoid nested selects
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be optimized much further with the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
    2. Select all the records in a single shot using the INTO TABLE clause of the SELECT statement rather than using APPEND statements.
    SELECT * FROM SBOOK INTO SBOOK_WA.
      CHECK: SBOOK_WA-CARRID = 'LH' AND
             SBOOK_WA-CONNID = '0400'.
    ENDSELECT.
    The above code can be optimized much further with the code written below, which avoids CHECK, selects with a selection list and fetches the data in one shot using INTO TABLE.
    SELECT  CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
      WHERE CARRID = 'LH' AND
            CONNID = '0400'.
    3. When a base table has multiple indices, the WHERE clause should be in the order of an index, either the primary or a secondary index.
    To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index whose fields are in the same order. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This comes in handy in programs that access data from the finance tables.
    4. For testing existence, use SELECT ... UP TO 1 ROWS instead of a SELECT-ENDSELECT loop with an EXIT.
    SELECT * FROM SBOOK INTO SBOOK_WA
      UP TO 1 ROWS
      WHERE CARRID = 'LH'.
    ENDSELECT.
    The above code is more optimized as compared to the code mentioned below for testing existence of a record.
    SELECT * FROM SBOOK INTO SBOOK_WA
        WHERE CARRID = 'LH'.
      EXIT.
    ENDSELECT.
    5. Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
    SELECT SINGLE requires one communication with the database system, whereas SELECT-ENDSELECT needs two.
    Select Statements: SQL Interface
    1. Use column updates instead of single-row updates to update your database tables.
    SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
      SFLIGHT_WA-SEATSOCC =
        SFLIGHT_WA-SEATSOCC - 1.
      UPDATE SFLIGHT FROM SFLIGHT_WA.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    UPDATE SFLIGHT
           SET SEATSOCC = SEATSOCC - 1.
    2.     For all frequently used Select statements, try to use an index.
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    The above mentioned code can be more optimized by using the following code
    SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
      WHERE MANDT IN ( SELECT MANDT FROM T000 )
        AND CARRID = 'LH'
        AND CONNID = '0400'.
    ENDSELECT.
    3. Using buffered tables improves the performance considerably.
    Bypassing the buffer increases the network load considerably.
    SELECT SINGLE * FROM T100 INTO T100_WA
      BYPASSING BUFFER
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    The above mentioned code can be more optimized by using the following code
    SELECT SINGLE * FROM T100  INTO T100_WA
      WHERE     SPRSL = 'D'
            AND ARBGB = '00'
            AND MSGNR = '999'.
    Select Statements: Aggregate Functions
    •     If you want to find the maximum, minimum, sum, average or count of a database column, use a selection list with aggregate functions instead of computing the aggregates yourself.
    The aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT and COUNT( * ).
    Consider the following extract.
    MAXNO = 0.
    SELECT * FROM ZFLIGHT WHERE AIRLN = 'LF' AND CNTRY = 'IN'.
      CHECK ZFLIGHT-FLIGH > MAXNO.
      MAXNO = ZFLIGHT-FLIGH.
    ENDSELECT.
    The above code can be optimized much further with the following code.
    SELECT MAX( FLIGH ) FROM ZFLIGHT INTO MAXNO WHERE AIRLN = 'LF' AND CNTRY = 'IN'.
    Select Statements: For All Entries
    •     FOR ALL ENTRIES creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
    The plus:
    •     Handles large amounts of data
    •     Mixes processing and reading of data
    •     Fast internal reprocessing of data
    •     Fast
    The minus:
    •     Difficult to program/understand
    •     Memory could be critical (use FREE or PACKAGE SIZE)
    Points that must be considered for FOR ALL ENTRIES:
    •     Check that data is present in the driver table
    •     Sort the driver table
    •     Remove duplicates from the driver table
    Consider the following piece of extract.
    LOOP AT INT_CNTRY.
      SELECT SINGLE * FROM ZFLIGH INTO INT_FLIGH
        WHERE CNTRY = INT_CNTRY-CNTRY.
      APPEND INT_FLIGH.
    ENDLOOP.
    The above can be optimized further with the following code.
    SORT INT_CNTRY BY CNTRY.
    DELETE ADJACENT DUPLICATES FROM INT_CNTRY.
    IF NOT INT_CNTRY[] IS INITIAL.
      SELECT * FROM ZFLIGH APPENDING TABLE INT_FLIGH
        FOR ALL ENTRIES IN INT_CNTRY
        WHERE CNTRY = INT_CNTRY-CNTRY.
    ENDIF.
    Select Statements: Select Over More than One Internal Table
    1. It's better to use a view instead of nested SELECT statements.
    SELECT * FROM DD01L INTO DD01L_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND AS4LOCAL = 'A'.
      SELECT SINGLE * FROM DD01T INTO DD01T_WA
        WHERE   DOMNAME    = DD01L_WA-DOMNAME
            AND AS4LOCAL   = 'A'
            AND AS4VERS    = DD01L_WA-AS4VERS
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    The above code can be optimized further by extracting all the data from the view DD01V.
    SELECT * FROM DD01V INTO  DD01V_WA
      WHERE DOMNAME LIKE 'CHAR%'
            AND DDLANGUAGE = SY-LANGU.
    ENDSELECT.
    2. To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferred only if all the primary key fields are available in the WHERE clause for the tables being joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
    SELECT * FROM EKKO INTO EKKO_WA.
      SELECT * FROM EKAN INTO EKAN_WA
          WHERE EBELN = EKKO_WA-EBELN.
      ENDSELECT.
    ENDSELECT.
    The above code can be optimized much further with the code written below.
    SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
        FROM EKKO AS P INNER JOIN EKAN AS F
          ON P~EBELN = F~EBELN.
    3.     Instead of using nested Select loops it is often better to use subqueries.
    SELECT * FROM SPFLI
      INTO TABLE T_SPFLI
      WHERE CITYFROM = 'FRANKFURT'
        AND CITYTO = 'NEW YORK'.
    SELECT * FROM SFLIGHT AS F
        INTO SFLIGHT_WA
        FOR ALL ENTRIES IN T_SPFLI
        WHERE SEATSOCC < F~SEATSMAX
          AND CARRID = T_SPFLI-CARRID
          AND CONNID = T_SPFLI-CONNID
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    The above mentioned code can be even more optimized by using subqueries instead of for all entries.
    SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
        WHERE SEATSOCC < F~SEATSMAX
          AND EXISTS ( SELECT * FROM SPFLI
                         WHERE CARRID = F~CARRID
                           AND CONNID = F~CONNID
                           AND CITYFROM = 'FRANKFURT'
                           AND CITYTO = 'NEW YORK' )
          AND FLDATE BETWEEN '19990101' AND '19990331'.
    ENDSELECT.
    1. Table operations should be done using explicit work areas rather than via header lines.
    2. Always try to use binary search instead of linear search, but don't forget to sort your internal table before that.
    READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
    is much faster than
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
    3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime:
    READ TABLE ITAB INTO WA WITH KEY K = 'X'.
    is faster than
    READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
    4. A binary search using a secondary index takes considerably less time.
    5.     LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
    LOOP AT ITAB INTO WA WHERE K = 'X'.
    ENDLOOP.
    The above code is much faster than using
    LOOP AT ITAB INTO WA.
      CHECK WA-K = 'X'.
    ENDLOOP.
    6. Modifying selected components using MODIFY itab ... TRANSPORTING f1 f2 ... accelerates the task of updating a line of an internal table.
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
    The above code is more optimized as compared to
    WA-DATE = SY-DATUM.
    MODIFY ITAB FROM WA INDEX 1.
    7. Accessing the table entries directly with LOOP ... ASSIGNING ... considerably accelerates the task of updating a set of lines of an internal table.
    Modifying selected components only makes the program faster compared with modifying all lines completely.
    e.g,
    LOOP AT ITAB ASSIGNING <WA>.
      I = SY-TABIX MOD 2.
      IF I = 0.
        <WA>-FLAG = 'X'.
      ENDIF.
    ENDLOOP.
    The above code works faster as compared to
    LOOP AT ITAB INTO WA.
      I = SY-TABIX MOD 2.
      IF I = 0.
        WA-FLAG = 'X'.
        MODIFY ITAB FROM WA.
      ENDIF.
    ENDLOOP.
    8. If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH followed by ADD.
    LOOP AT ITAB1 INTO WA1.
      READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
      IF SY-SUBRC = 0.
        ADD: WA1-VAL1 TO WA2-VAL1,
             WA1-VAL2 TO WA2-VAL2.
        MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
      ELSE.
        INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
      ENDIF.
    ENDLOOP.
    The above code uses READ ... BINARY SEARCH for collect semantics, which runs in O( log2( n ) ) time per lookup. The piece of code above can be optimized further as follows.
    LOOP AT ITAB1 INTO WA.
      COLLECT WA INTO ITAB2.
    ENDLOOP.
    SORT ITAB2 BY K.
    COLLECT, however, uses a hash algorithm and is therefore independent of the number of entries (i.e. O( 1 )).
    9. APPEND LINES OF itab1 TO itab2 accelerates the task of appending one table to another considerably, compared with LOOP-APPEND-ENDLOOP.
    APPEND LINES OF ITAB1 TO ITAB2.
    This is more optimized as compared to
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably, compared with READ-LOOP-DELETE-ENDLOOP.
    DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
    This is much more optimized as compared to
    READ TABLE ITAB INDEX 1 INTO PREV_LINE.
    LOOP AT ITAB FROM 2 INTO WA.
      IF WA = PREV_LINE.
        DELETE ITAB.
      ELSE.
        PREV_LINE = WA.
      ENDIF.
    ENDLOOP.
    11. DELETE itab FROM ... TO ... accelerates the task of deleting a sequence of lines considerably, compared with DO-DELETE-ENDDO.
    DELETE ITAB FROM 450 TO 550.
    This is much more optimized as compared to
    DO 101 TIMES.
      DELETE ITAB INDEX 450.
    ENDDO.
    12. Copying internal tables using ITAB2[] = ITAB1[] is much faster than LOOP-APPEND-ENDLOOP.
    ITAB2[] = ITAB1[].
    This is much more optimized as compared to
    REFRESH ITAB2.
    LOOP AT ITAB1 INTO WA.
      APPEND WA TO ITAB2.
    ENDLOOP.
    13. Specify the sort key as restrictively as possible to make the program run faster.
    SORT ITAB BY K. makes the program run faster than SORT ITAB.
    Internal Tables (contd.): Hashed and Sorted Tables
    1. For single read access, hashed tables are more optimized than sorted tables.
    2. For partial sequential access, sorted tables are more optimized than hashed tables.
    Point # 1
    Consider the following example where HTAB is a hashed table and STAB is a sorted table
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    This runs faster for single read access than the same code on the sorted table:
    DO 250 TIMES.
      N = 4 * SY-INDEX.
      READ TABLE STAB INTO WA WITH TABLE KEY K = N.
      IF SY-SUBRC = 0.
      ENDIF.
    ENDDO.
    Point # 2
    Similarly, for partial sequential access, STAB runs faster than HTAB.
    LOOP AT STAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.
    This runs faster as compared to
    LOOP AT HTAB INTO WA WHERE K = SUBKEY.
    ENDLOOP.

  • Importing is taking too much time (2 DAYS)

    Dear All,
    I'm importing the support packages below together in one queue on SAP Solution Manager 4.0:
    SAPKB70015             Basis Support Package 15 for 7.00
    SAPKA70015             ABA Support Package 15 for 7.00
    SAPKITL426             ST 400: Patch 0016, CRT for SAPKB70015
    SAPKIBIIP6             BI_CONT 703: patch 0006
    SAPKIBIIP7             BI_CONT 703: patch 0007
    SAPKIBIIP8             BI_CONT 703: patch 0008
    SAPK-40010INCPRXRPM    CPRXRPM 400: patch 0010
    SAPK-40011INCPRXRPM    CPRXRPM 400: patch 0011
    SAPK-40012INCPRXRPM    CPRXRPM 400: patch 0012
    SAPKIPYJ7E             PI_BASIS 2005_1_700: patch 0014
    SAPKW70016             BW Support Package 16 for 7.00
    The import has been taking too much time (2 days) in the main import phase. I have looked at the SLOG; there are many rows saying "I am waiting 1 sec" and "6 sec". I also checked transaction STMS: all support packages have been imported except one, SAPKW70016.
    Please advise.
    Best Regards,
    HE

    Hello Mohan,
    The DBTABLOG table does get large; the best option is to switch off logging. If that's not possible, increase the frequency of your delete job. Also explore one more alternative: have a look at the archiving object BC_DBLOGS; you could archive old records (in accordance with your customer's data retention policies) to reduce the size of the table.
    Also have a look at the following notes; they will advise you on how to improve the performance of your delete job:
    Note 531923 - Audit Trail: Indexes on table DBTABLOG
    Note 579980 - Table logs: Performance during access to DBTABLOG
    Regards,
    Siddhesh

  • OBIEE Consistency Check is taking too much time

    Hi,
    When I run the consistency check globally, it takes too much time. It stalls after completing about 80% of it; I can't click on anything and the application freezes. How can I recover it? What could be the cause of this problem?
    Thank you for your attention,
    Regards


  • Taking too much time in Rules (DTP schedule run)

    Hi,
    I am scheduling a DTP which has filters to minimize the data loaded.
    When I run the DTP it takes too much time in the "Rules" step (in the DTP monitor I can see the status package by package and step by step: "Start Routine", "Rules" and "End Routine").
    It is consuming too much time in the rules mapping.
    What is the problem, and are there any solutions?
    regards,
    sree

    Hi,
    The time taken at "Rules" depends on the complexity involved in your routine. If it is a complex calculation, it will take time.
    Also check your DTP batch settings, i.e. how many background processes are used to perform the DTP, and the job class.
    You can find these as follows:
    Go to the DTP, open the Goto menu and select "Settings for Batch Manager".
    In that screen, increase the number of processes from 3 to a higher number (max 9) and change the job class to 'A'.
    If your DTP is still running, cancel it (i.e. kill the DTP), delete the request from the cube, change these settings and run your DTP once more.
    You can observe the difference.
    Reddy

  • [Solved] Dolphin taking too much time to respond

    Dolphin, when used by an ordinary user, takes too much time to respond, especially when Firefox is running. The time it takes to initialize is also very long. At times it does not even open, and even when active it hangs for quite some time before becoming responsive again while trying to select a file or two.
    At the same time, it works rather faster when run as the superuser from the command line using $ sudo dolphin.
    But then the console displays a lot of error messages, as follows.
    sebinaj ~
    $ sudo dolphin
    Error: "/var/tmp/kdecache-sebinaj1Jz46I" is owned by uid 1002 instead of uid 0.
    Error: "/tmp/kde-sebinaj" is owned by uid 1002 instead of uid 0.
    sebinaj ~
    $ Error: "/tmp/ksocket-sebinaj" is owned by uid 1002 instead of uid 0.
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Invalid iterator."
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Invalid iterator."
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Invalid iterator."
    Error: alias title requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#title, http://www.semanticdesktop.org/ontologies/2007/03/22/nco#title
    Error: alias comment requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#comment, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#comment
    Error: alias count requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#count, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#count
    Error: alias created requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#created, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#created
    Error: alias description requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#description, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#description
    Error: alias duration requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#duration, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#duration
    Error: alias encoding requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#encoding, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#encoding
    Error: alias role requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#role, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#role
    Error: alias url requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#url, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#url
    Error: alias version requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#version, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#version
    Error: alias bitsPerSample requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#bitsPerSample, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#bitsPerSample
    Error: alias copyright requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#copyright, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#copyright
    Error: alias date requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#date, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#date
    Error: alias dateTime requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#dateTime, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#dateTime
    Error: alias geo requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#geo, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#geo
    Error: alias height requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#height, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#height
    Error: alias width requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#width, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#width
    Error: alias date requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#date, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#date
    Error: alias fileOwner requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#fileOwner, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#fileOwner
    Error: alias language requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#language, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#language
    Error: alias length requested by several properties: http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#length, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#length
    Error: alias publisher requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#publisher, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#publisher
    Error: alias title requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#title, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#title
    Error: alias contributor requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#contributor, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#contributor
    Error: alias created requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#created, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#created
    Error: alias creator requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#creator, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#creator
    Error: alias description requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#description, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#description
    Error: alias identifier requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#identifier, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#identifier
    Error: alias lastModified requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#lastModified, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#lastModified
    Error: alias version requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#version, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#version
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#fileExtension' is not defined in any rdfs ontology database.
    WARNING: field 'http://strigi.sf.net/ontologies/0.9#debugParseError' is not defined in any rdfs ontology database.
    /usr/lib/strigi/strigiea_ics.so
    /usr/lib/strigi/strigiea_jpeg.so
    /usr/lib/strigi/strigiea_vcf.so
    /usr/lib/strigi/strigila_cpp.so
    /usr/lib/strigi/strigila_deb.so
    /usr/lib/strigi/strigila_diff.so
    /usr/lib/strigi/strigila_mobi.so
    /usr/lib/strigi/strigila_namespaceharvester.so
    /usr/lib/strigi/strigila_po.so
    /usr/lib/strigi/strigila_txt.so
    /usr/lib/strigi/strigila_xpm.so
    /usr/lib/strigi/strigita_au.so
    /usr/lib/strigi/strigita_audible.so
    /usr/lib/strigi/strigita_avi.so
    /usr/lib/strigi/strigita_dds.so
    /usr/lib/strigi/strigita_dvi.so
    /usr/lib/strigi/strigita_font.so
    /usr/lib/strigi/strigita_gif.so
    /usr/lib/strigi/strigita_ico.so
    /usr/lib/strigi/strigita_mp4.so
    /usr/lib/strigi/strigita_pcx.so
    /usr/lib/strigi/strigita_rgb.so
    /usr/lib/strigi/strigita_sid.so
    /usr/lib/strigi/strigita_ts.so
    /usr/lib/strigi/strigita_wav.so
    /usr/lib/strigi/strigita_xbm.so
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#usesNamespace' is not defined in any rdfs ontology database.
    WARNING: field 'translation.total' is not defined in any rdfs ontology database.
    WARNING: field 'translation.translated' is not defined in any rdfs ontology database.
    WARNING: field 'diff.stats.modify_file_count' is not defined in any rdfs ontology database.
    WARNING: field 'content.mime_type' is not defined in any rdfs ontology database.
    WARNING: field 'audio.title' is not defined in any rdfs ontology database.
    WARNING: field 'font.family' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#artist' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#musicAlbum' is not defined in any rdfs ontology database.
    [... roughly a hundred more identical "WARNING: field '...' is not defined in any rdfs ontology database." lines, covering the remaining translation.*, diff.*, content.*, audio.*, media.*, image.*, font.*, user.*, contact, xesam, nfo, and nmm fields ...]
    kDebugStream called after destruction (from void KDirWatchPrivate::removeEntry(KDirWatch*, KDirWatchPrivate::Entry*, KDirWatchPrivate::Entry*) file /home/phil/kdemod/core/kdelibs/src/kdelibs-4.3.3/kio/kio/kdirwatch.cpp line 901)
    Cancelled INotify (fd 9, 1) for "/home/sebinaj/.local/share"
    ^C
    I am using KDEmod + Arch

    There's a large thread around about this Dolphin problem.
    Disabling Nepomuk in System Settings has proved to be the
    cure in many cases.
    Deej
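    For reference, when System Settings isn't available, the same switch can usually be flipped in Nepomuk's own config file. This is a sketch from memory for KDE 4.x, so treat the path and key names as assumptions and verify them on your system:

        # assumed KDE 4.x location: ~/.kde4/share/config/nepomukserverrc
        # (some distros use ~/.kde/share/config/ instead)
        [Basic Settings]
        Start Nepomuk=false

    The change takes effect once the Nepomuk server restarts, e.g. at next login.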

  • iPad Air too much time to open a link from Mail

    Since the last update to iOS 8.0.2, my iPad Air takes too much time to open a web link from the Mail app; Safari opens only after a long delay.
    Furthermore, the attachments in emails are often not readable; I have to reload the message to be able to open them.
    Last but not least, I have too many messages that are not downloaded from the IMAP server, so I can read only their titles.

  • ROWSET taking too much time to populate

    Hi,
    I have a problem with RowSet.
    I have a table with 6 columns and a little more than 200 records.
    I retrieve them into a ResultSet, and the ResultSet gets populated fine.
    But when I populate a RowSet from that ResultSet, it hangs for at least a minute before coming back to its normal state.
    Does anyone have an idea why this happens?
    Is there a known issue with RowSet?
    Why does it take so much time to populate the RowSet?
    import java.sql.ResultSet;
    import java.sql.Statement;
    import com.sun.rowset.CachedRowSetImpl; // internal Sun class, pre-Java 7

    CachedRowSetImpl crs = new CachedRowSetImpl();
    Statement stmt = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
    ResultSet result = stmt.executeQuery(sql); // here we have 250 records in the ResultSet
    crs.populate(result); // this is the problem: this line takes at least 1 min to execute
    Thanks
    Tariq
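
    For what it's worth, a common culprit with slow populate() calls is the TYPE_SCROLL_SENSITIVE statement: some drivers implement scroll sensitivity by refetching rows one at a time, which is far slower than a plain forward-only cursor. Below is a minimal sketch that times populate() with a default (forward-only, read-only) statement; it assumes Java 7+, where the standard RowSetProvider factory replaces the internal com.sun.rowset class, and the JDBC URL, credentials, and table name are placeholders rather than anything from the original post.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import javax.sql.rowset.CachedRowSet;
        import javax.sql.rowset.RowSetProvider;

        public class PopulateTimer {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details -- substitute your own driver/URL.
                try (Connection con = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pass");
                     // Default statement: TYPE_FORWARD_ONLY / CONCUR_READ_ONLY,
                     // which lets most drivers stream rows in one pass.
                     Statement stmt = con.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT * FROM some_table")) {

                    // Standard factory (Java 7+) instead of com.sun.rowset.CachedRowSetImpl.
                    CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();

                    long start = System.currentTimeMillis();
                    crs.populate(rs); // copies every row into memory
                    long elapsed = System.currentTimeMillis() - start;

                    System.out.println("populate() took " + elapsed + " ms for " + crs.size() + " rows");
                }
            }
        }

    If the timing drops once the statement is forward-only, the minute-long hang was the driver refetching rows to honour scroll sensitivity, not RowSet itself.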

    > I think this Java Forum is not the right FORUM for me.....
    I think you are right. This is NOT the FORUM for YOU.
    > I think u (So Called GURUS) all need to work on JDBC more and on large application so that u should be able to solve complex problem.
    THIS is PROBABLY not the BEST way for YOU to get HELP.
    > I hope u will work on my suggestion...
    I HOPE you CHOKE on a TURNIP and DIE.
    > Thx
    > Muhammad Tariq
    Fuck you.
    On your way out please make sure to stop in at Google and search for "Forums for arrogant fuckwits". That's the kind of resource you need.

Maybe you are looking for