Code taking too much time to output

The following code is taking too much time to execute (sometimes it even times out).
ind = sy-tabix.
SELECT SINGLE * FROM mseg INTO mseg
   WHERE bwart = '102' AND
         lfbnr = itab-mblnr AND
         ebeln = itab-ebeln AND
         ebelp = itab-ebelp.
IF sy-subrc = 0.
  DELETE itab INDEX ind.
  CONTINUE.
ENDIF.
Is there any other way to write this code to reduce the runtime?
Thanks

Hi,
I think you are executing this code inside a loop, which is causing the problem. The rule is: never put SELECT statements inside a loop.
Try to rewrite the code as follows:
* Outside the loop: fetch all matching MSEG rows with one database access.
* Guard against an empty ITAB - FOR ALL ENTRIES on an empty table
* would select the whole of MSEG.
IF itab[] IS NOT INITIAL.
  SELECT * FROM mseg
    INTO TABLE lt_mseg
    FOR ALL ENTRIES IN itab
    WHERE bwart = '102'
      AND lfbnr = itab-mblnr
      AND ebeln = itab-ebeln
      AND ebelp = itab-ebelp.
ENDIF.
Then, inside the loop, do a READ on the internal table instead of a SELECT:
SORT lt_mseg BY lfbnr ebeln ebelp.
LOOP AT itab.
  READ TABLE lt_mseg WITH KEY lfbnr = itab-mblnr
                              ebeln = itab-ebeln
                              ebelp = itab-ebelp
                     TRANSPORTING NO FIELDS
                     BINARY SEARCH.   "BWART = '102' was already filtered in the SELECT
  IF sy-subrc = 0.   "a matching 102 record exists, so delete - same logic as the original
    DELETE itab.     "without INDEX, the current loop line is deleted
  ENDIF.
ENDLOOP.
I think this should optimise performance. You can check your code's performance using SE30 or ST05.
Hope this helps! Please revert if you need anything else.
Cheers,
Shailesh.
Always provide feedback for helpful answers!

Similar Messages

  • Taking too much time to load application

    Hi,
    I have deployed a J2EE application on Oracle 10g version 10.1.2.0.2, but the application is taking too much time to load. After loading, everything works fast.
    I have another 10g server (same version) on which the same application loads very fast.
    When I checked the Apache error logs, I found this:
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:36:36 2007] [notice] FastCGI: process manager initialized (pid 21177)
    [Thu Apr 26 11:36:37 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:36:37 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:36:37 2007] [warn] long lost child came home! (pid 9124)
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0015: recv() returns 0. There has no message available to be received and oc4j has gracefully (orderly) closed the connection.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0184: Failed to find an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0119: Failed to get an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:46:33 2007] [notice] FastCGI: process manager initialized (pid 21726)
    [Thu Apr 26 11:46:34 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:46:34 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:46:34 2007] [warn] long lost child came home! (pid 21182)
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    Please HELP ME...

    Hi, this is the solution given by your link:
    A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
    Problem
    To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
    In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of stateful firewalls.
    However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
    Solution
    The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
    Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
    Oc4jUserKeepalive on
    Oc4jConnTimeout 12000 (or a similar value)
    Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
    ajp.keepalive=true
    For example:
    java -Dajp.keepalive=true -jar oc4j.jar
    Please tell me where, or in which file, I should put the option:
    java -Dajp.keepalive=true -jar oc4j.jar

  • Taking too much time using BufferedWriter to write to a file

    Hi,
    I'm using the method extractItems(), given below, to write data to a file. This method takes too much time to execute when the number of records in the enumeration is 10000 and above; to be precise, around 70 minutes. The writing pauses intermittently for 20 seconds after writing a few lines, and sometimes for much longer. Has somebody faced this problem before, and if so, what could be the cause? This is very high-priority work, and it would be really helpful if someone could give me some info on this.
    Thanks in advance.
    public String extractItems() throws InternalServerException {
        try {
            String extractFileName = getExtractFileName();
            FileWriter fileWriter = new FileWriter(extractFileName);
            BufferedWriter bufferWrt = new BufferedWriter(fileWriter);
            CXBusinessClassIfc editClass = new ExploreClassImpl(className, mdlMgr);
            CXPropertyInfoIfc[] propInfo = editClass.getClassPropertyInfo(configName);
            PrintWriter out = new PrintWriter(bufferWrt);
            TemplateHeaderInfo.printHeaderInfo(propInfo, out, mdlMgr);
            XDItemSet itemSet = getItemsForObjectIds(catalogEditDO.getSelectedItems());
            Enumeration allitems = itemSet.allItems();
            System.out.println("the batch size : " + itemSet.getBatchSize());
            XDForm frm = itemSet.getXDForm();
            XDFormProperty[] props = frm.getXDFormProperties();
            bufferWrt.newLine();
            long startTime = System.currentTimeMillis();
            System.out.println("time before the while loop: " + startTime);
            // (the per-row timing printlns from the original post are omitted here for readability)
            while (allitems.hasMoreElements()) {
                String aRow = "";
                XDItem item = (XDItem) allitems.nextElement();
                for (int i = 0; i < props.length; i++) {
                    // the forum formatting swallowed the array index here; it should be props[i]
                    String value = item.getStringValue(props[i]);
                    if (value == null || value.equalsIgnoreCase("null"))
                        value = "";
                    if (i == 0)
                        aRow = value;
                    else
                        aRow += ("\t" + value);
                }
                bufferWrt.write(aRow.toCharArray());
                bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
                bufferWrt.newLine();
            }
            System.out.println("time after the while loop: " + System.currentTimeMillis());
            out.close(); // added by rosmon to check extra time taken for extraction
            bufferWrt.close();
            fileWriter.close();
            return extractFileName;
        } catch (Exception e) {
            e.printStackTrace();
            throw new InternalServerException(e.getMessage());
        }
    }

    Hi fiontan,
    Thanks a lot for the response!!!
    Yeah! I know it's a lotta code, but I thought it'd be more informative if the whole function was quoted.
    I'm in fact using the PrintWriter to wrap the BufferedWriter, but am not using the print() method.
    Does it save any time to use the print() method?
    The place where the delay occurs is the while loop shown below:
    while (allitems.hasMoreElements()) {
        String aRow = "";
        XDItem item = (XDItem) allitems.nextElement();
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]);   // array index restored
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i == 0)
                aRow = value;
            else
                aRow += ("\t" + value);
        }
        startTime1 = System.currentTimeMillis();
        System.out.println("time before writing to buffer (out.flush() done): " + startTime1);
        bufferWrt.write(aRow.toCharArray());
        out.flush();       // added by rosmon to check extra time taken for extraction
        bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.newLine();
        startTime2 = System.currentTimeMillis();
        System.out.println("time after writing to buffer: " + startTime2);
    }
    What exactly happens is that after a few loops it just seems to sleep for around 20 seconds, then starts off again, and it goes on like that until the records are done.
    Please do lemme know if you have any idea why this is happening! This bug is giving me a scare.
    thanks in advance
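    A likely culprit, offered as a hedged aside rather than as this thread's confirmed resolution: building each row with String += allocates a new String per concatenation (quadratic garbage over long rows), and calling flush() after every line defeats the BufferedWriter entirely, so the intermittent 20-second pauses would be garbage collection plus tiny disk writes. A minimal reworked sketch of the same loop, assuming the bufferWrt, allitems and props declarations from the post:
    StringBuilder aRow = new StringBuilder(256);
    while (allitems.hasMoreElements()) {
        XDItem item = (XDItem) allitems.nextElement();
        aRow.setLength(0);                   // reuse one buffer per row
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]);
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i > 0)
                aRow.append('\t');
            aRow.append(value);
        }
        bufferWrt.write(aRow.toString());
        bufferWrt.newLine();                 // note: no flush() inside the loop
    }
    bufferWrt.flush();                       // flush once, after the loop
    If the pauses persist even without the per-row flush, timing the loop again (as the poster already does) should show whether the remaining cost is in item.getStringValue() rather than in the writing.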

  • Report taking too much time in the portal

    Hi friends,
    We have developed a report on the ODS and published it on the portal.
    The problem is that when several users execute the report at the same time, it takes too much time, so the performance is very poor.
    Is there any way to sort out this issue? For example, can we send the report to each individual user's mail ID, so that they do not have to log in to the portal? Or can we create the same report on the cube?
    What would be the main difference if the report were built on the cube instead of the ODS?
    Please help me.
    Thanks in advance,
    Sridath

    Hi
    Try this to improve the performance of the query.
    First, find the query runtime. Where do you find it? See these SAP Notes:
    Note 557870 - FAQ: BW Query Performance
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips:
    1. Use aggregates and compression.
    2. Use fewer, and less complex, cell definitions where possible.
    3. Avoid using too many navigational attributes.
    4. Avoid restricted and calculated key figures (RKF and CKF).
    5. Avoid too many characteristics in the rows.
    Use transaction ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution, pick a particular day > check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later executions of the report will retrieve the result faster from the OLAP cache.
    Also try
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio, and the number of records transferred to the front end versus the number of records selected from the database.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the cube's performance metric, measure query runtime.
    3. To check the performance of the aggregates,see the columns valuation and usage in aggregates.
    Open the Aggregates...and observe VALUATION and USAGE columns.
    "---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
    if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
    In valuation column,if there are more positive sign it means that the aggregate performance is good and it is useful to have this aggregate.But if it has more negative sign it means we need not better use that aggregate.
    In usage column,we will come to know how far the aggregate has been used in query.
    Thus we can check the performance of the aggregate.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    performance ISSUE related to AGGREGATE
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also your query performance can depend upon criteria and since you have given selection only on one infoprovider...just check if you are selecting huge amount of data in the report
    Check the query read mode in RSRT (whether it is A, X, or H); the advisable read mode is X.
    5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
    Implement the BW Statistics Business Content: you need to install it and feed it data, and it provides ready-made reports for the analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    202469 - Using aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless by checking the RSDDSTATAGGRDEF* tables.
    Run the query in RSRT with statistics and you will get a STATUID; copy this and look it up in the table.
    This tells you exactly which InfoObjects the query hits; if any one of the objects is missing, the aggregate is useless.
    6. Check table RSDDAGGRDIR in SE11; you can find the last call-up of each aggregate there.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • While condition is taking too much time

    I have a query that returns around 2100 records (not many!). When I process my result set with a while loop, it takes too much time (around 30 seconds). Here is the code:
    public static GroupHierEntity load(Connection con) throws SQLException {
        internalCustomer = false;
        String customerNameOfLogger = com.photomask.framework.ControlServlet.CUSTOMER_NAME;
        // NOTE: the null check must come before startsWith()/equals(),
        // otherwise a null name throws a NullPointerException
        if (customerNameOfLogger == null
                || customerNameOfLogger.equals("")
                || customerNameOfLogger.equals("Unavailable")
                || customerNameOfLogger.startsWith("DPI")
                || customerNameOfLogger.startsWith("DUPONT")) {
            internalCustomer = true;
        }
        // show all groups to internal customers and only their own customer groups to external customers
        if (internalCustomer) {
            stmtLoad = con.prepareStatement(sqlLoad);
            ResultSet rs = stmtLoad.executeQuery();
            return new GroupHierEntity(rs);
        } else {
            stmtLoadExternal = con.prepareStatement(sqlLoadExternal);
            stmtLoadExternal.setString(1, customerNameOfLogger);
            stmtLoadExternal.setString(2, customerNameOfLogger);
            ResultSet rs = stmtLoadExternal.executeQuery();
            return new GroupHierEntity(rs);
        }
    }
    // caller side (a separate method in the original post):
    GroupHierEntity ge = GroupHierEntity.load(con);
    while (ge.next()) {
        lvl = ge.getInt("lvl");
        oid = ge.getLong("oid");
        name = ge.getString("name");
        if (internalCustomer) {
            if (lvl == 2) {
                int i = getAlphaIndex(name);
                super.setAppendRoot(alphaIndex);
            }
        }
        gn = new GroupListDataNode(lvl + 1, oid, name);
        gn.setSelectable(true);
        this.addNode(gn);
        count++;
    }
    System.out.println("*** count " + count);
    ge.close();
    ========================
    Then I removed everything in the while body and ran it as-is; it still takes the same time (30 secs):
    while (ge.next()) {
        count++;
    }
    Why is the while condition (ge.next()) taking so much time? Is there any other, more efficient way of reading the result set?
    Thanks ,
    bala

    I tried all these things. The query itself is not taking much time (1 sec), but resultset.next() is taking too much time. I measured the time by putting System.out.println() at various points to see which part takes how long.
    executeQuery() takes only 1 sec; processing the result set (moving the cursor to the next position) is what takes the time.
    I have similar queries that return some 800 rows, and those take only 1 sec.
    So I have my doubts about resultset.next(). Any other alternative?
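    One avenue worth checking, offered here as a hedged suggestion since the thread never resolves it: with Oracle's JDBC driver the default fetch size is only 10 rows, so a loop over 2100 rows can mean a couple of hundred network round trips, and each ResultSet.next() that crosses a batch boundary blocks on the network. A minimal sketch of raising the fetch size, assuming a plain JDBC connection (the class and method names are illustrative, not from the original post):
    import java.sql.*;

    public class FetchSizeDemo {
        // Reads all rows of a query with a larger fetch size, so that
        // ResultSet.next() touches the network once per 500 rows
        // instead of once per 10 (the Oracle driver's default).
        static int countRows(Connection con, String sql) throws SQLException {
            PreparedStatement stmt = con.prepareStatement(sql);
            stmt.setFetchSize(500);      // hint: fetch 500 rows per round trip
            ResultSet rs = stmt.executeQuery();
            int count = 0;
            while (rs.next()) {
                count++;                 // per-row processing would go here
            }
            rs.close();
            stmt.close();
            return count;
        }
    }
    If the loop is still slow with a large fetch size, the time is probably going into query execution itself; Oracle often does the real work on the first fetch rather than in executeQuery(), so an explain plan would be the next thing to check.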

  • Importing is taking too much time (2 DAYS)

    Dear All,
    I'm importing the support packages below together in one queue on SAP Solution Manager 4.0:
    SAPKB70015             Basis Support Package 15 for 7.00
    SAPKA70015             ABA Support Package 15 for 7.00
    SAPKITL426             ST 400: Patch 0016, CRT for SAPKB70015
    SAPKIBIIP6             BI_CONT 703: patch 0006
    SAPKIBIIP7             BI_CONT 703: patch 0007
    SAPKIBIIP8             BI_CONT 703: patch 0008
    SAPK-40010INCPRXRPM    CPRXRPM 400: patch 0010
    SAPK-40011INCPRXRPM    CPRXRPM 400: patch 0011
    SAPK-40012INCPRXRPM    CPRXRPM 400: patch 0012
    SAPKIPYJ7E             PI_BASIS 2005_1_700: patch 0014
    SAPKW70016             BW Support Package 16 for 7.00
    The import is taking too much time (2 days) in the main import phase. I have looked at SLOG; there are many rows saying "I am waiting 1 sec" and "I am waiting 6 sec". I also checked transaction STMS: all support packages were imported except one, SAPKW70016.
    Please advise.
    Best Regards,
    HE

    Hello Mohan,
    The DBTABLOG table does get large; the best option is to switch off logging. If that's not possible, increase the frequency of your delete job. Also explore one more alternative: have a look at the archiving object BC_DBLOGS. You could archive old records (in accordance with your customer's data retention policies) to reduce the size of the table.
    Also, have a look at the following notes, they will advise you on how to improve the performance of your delete job:
    Note 531923 - Audit Trail: Indexes on table DBTABLOG
    Note 579980 - Table logs: Performance during access to DBTABLOG
    Regards,
    Siddhesh

  • Job is taking too much time

    Hi,
    I am running one piece of SQL code that takes 1 hour, but when I schedule the same code in a job using a package, it takes 5 hours.
    Could anybody suggest why it takes so much longer?
    Regards
    Gagan

    Use TRACE and TKPROF with wait events to see where time is being spent (or wasted).
    See these informative threads:
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    HOW TO: Post a SQL statement tuning request - template posting
    Also you can use V$SESSION and/or V$SESSION_LONGOPS to see what code is currently executing.

  • Job is taking too much time during Delta loads

    hi
    When I tried to extract delta records from R/3 for the standard extractor 0FI_GL_4, it took 46 minutes even though there are very few delta records (only 193).
    Please find below the R/3 job log; the major part of the time is spent in the call to the customer enhancement BW_BTE_CALL_BW204010_E.
    Please let me know why this is taking so much time.
    06:10:16  4 LUWs confirmed and 4 LUWs to be deleted with FB RSC2_QOUT_CONFIRM_DATA
    06:56:46  Call up of customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 193 records
    06:56:46  Result of customer enhancement: 193 records
    06:56:46  Call up of customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 193 records
    06:56:46  Result of customer enhancement: 193 records
    06:56:46  Asynchronous sending of data package 1 in task 0002 (1 parallel tasks)
    06:56:47  IDOC: InfoIDOC 2, IDOC no. 121289649, duration 00:00:00
    06:56:47  IDOC: Begin 09.05.2011 06:10:15, end 09.05.2011 06:10:15
    06:56:48  Asynchronous sending of InfoIDOCs 3 in task 0003 (1 parallel tasks)
    06:56:48  Through selection conditions, 0 records filtered out in total
    06:56:48  IDOC: InfoIDOC 3, IDOC no. 121289686, duration 00:00:00
    06:56:48  IDOC: Begin 09.05.2011 06:56:48, end 09.05.2011 06:56:48
    06:56:54  tRFC: Data package 1, TID = 3547D5D96D2C4DC7740F217E, duration 00:00:07, ARFCSTATE =
    06:56:54  tRFC: Begin 09.05.2011 06:56:47, end 09.05.2011 06:56:54
    06:56:55  Synchronous sending of InfoIDOCs 4 (0 parallel tasks)
    06:56:55  IDOC: InfoIDOC 4, IDOC no. 121289687, duration 00:00:00
    06:56:55  IDOC: Begin 09.05.2011 06:56:55, end 09.05.2011 06:56:55
    06:56:55  Job finished
    Regards
    Atul

    Hi Atul,
    Have you written any customer exit code? If yes, check it for optimization opportunities.
    Kind Regards,
    Ashutosh Singh

  • Query is taking too much time

    hi
    The following query is taking too much time (more than 30 minutes), working with 11g.
    The table has three columns (rid, ida, geometry) and an index has been created on each column.
    The table has around 540,000 records of point geometries.
    Please help me with your suggestions. I want to select duplicate point geometries where ida = 'CORD'.
    SQL> select a.rid, b.rid from totalrecords a, totalrecords b
         where a.ida = 'CORD' and b.ida = 'CORD'
           and sdo_equal(a.geometry, b.geometry) = 'TRUE'
           and a.rid != b.rid order by 1, 2;
    regards

    I have removed some AND conditions that were not necessary. It's just that Oracle can see, for example, that in
    a.ida = 'CORD' AND
    b.ida = 'CORD' AND
    a.rid != b.rid AND
    sdo_equal(a.geometry, b.geometry) = 'TRUE'
    ORDER BY 1,2;
    if a.ida does not equal 'CORD', the whole set of conditions evaluates to FALSE, so Oracle will not bother evaluating the rest: it is all AND'ed together, and TRUE AND FALSE = FALSE.
    So if you place your least expensive conditions first (even though the optimizer can and will reorder conditions), you get a small performance benefit. Usually too small to notice, but on 540,000 records it should be noticeable.
    "And I have set layer_gtype=POINT." Good, that will help. I forgot about that one (thanks Luc!).
    "Now I am facing the problem of deleting the duplicate point geometries. The following query is taking too much time." What is too much time? Do you need to delete these duplicate points on a daily or hourly basis, or is this a one-time cleanup action? If it is a one-time cleanup operation, does it really matter if it takes half an hour?
    And if it is a daily or even hourly operation, then why not prevent the duplicates from entering the table in the first place? That will save you from having to clean up afterwards. Of course, this might not be possible with your business requirements.
    Lastly: can you post an explain plan for your queries? Those might give us an idea of what is taking so much time. Please enclose the results of the explain plan with
    [ c o d e ]
    <code/results here>
    [ / c o d e ]
    that way the original formatting is kept and it makes things much easier to read.
    Regards,
    Stefan

  • Delta Sync taking too much time on refreshing of tables

    Hi,
    I am working on Smart Service Manager 3.0 and have come across a scenario where the delta sync is taking too much time.
    The requirement is that if we update the stock quantity, the stock should be updated instantaneously.
    To achieve this we have to refresh 4 stock tables on every sync so that the updated quantity is reflected on the device.
    This takes a lot of time (3 to 4 minutes), which is highly unacceptable from the user's perspective.
    Could anyone please suggest something so that only the tables affected by an action get refreshed?
    For example, the CTStock table should be refreshed, and thus updated, only if I transfer stock; not in any other scenario, such as changing the status from accept to driving, or anything else unrelated to stocks.
    Thanks,
    Star


  • BAPI Function taking too much time

    Hi,
    One BAPI function, BAPI_INCOMINGINVOICE_CREATE, is taking too much time to process the data. Do we have any alternative to reduce the time taken by this BAPI?
    For example, via an SAP Note, an alternative function module, or any other way.
    Thanks and regards,
    Shailendra

    Hi,
    Please apply OSS Note 830717. I applied this note and ran transaction code FBZ0, and the "withholding tax code" changed from XX to 03.
    Reward if helpful

  • Simple APD is taking too much time in running

    Hi All,
    We have one APD, created on our development system, which is taking too much time to run.
    The APD fetches data from a query having only 1200 records and puts it directly into a master-data attribute.
    The query runs fine in transaction RSRT and returns its output within 5 seconds, but in the APD, even displaying data over the query takes too much time.
    The APD takes around 1.20 hours to run.
    Thanks in advance!

    Hi,
    When a query runs in an APD it normally takes much, much longer than it does in RSRT. Run times such as the ones you describe (5 seconds in RSRT and over an hour in the APD) are quite normal; I've seen some of my queries run for several hours in an APD as well.
    You just have to wait for it to complete.
    Regards,
    Suhas

  • Sites Taking too much time to open and shows error

    Hi,
    I've set up a SharePoint 2013 environment correctly and created a site collection, and everything was working fine. But suddenly, when I try to open that site collection or the Central Administration site, it takes too much time to open a page; most of the time it does not open the page at all and shows an error.
    I have gone to the logs folder under the 15 hive, but found nothing useful there. Please tell me why it takes about 10-12 minutes to open a site or any page and then shows the error.

    This usually happens if you are low on hardware resources. Check whether your machine meets the required software and hardware requirements:
    https://technet.microsoft.com/en-us/library/cc262485.aspx
    http://sharepoint.stackexchange.com/questions/58370/minimum-real-world-system-requirements-for-sharepoint-2013
    Please remember to up-vote or mark the reply as answer if you find it helpful.

  • Creative Cloud is taking too much time to load and is not downloading the one month trial for Photoshop I just paid money for.

    Creative Cloud is taking too much time to load and is not downloading the one month trial for Photoshop I just paid money for.

    Stop the download if it's stalled, then restart it.

  • iPhone taking too much time to open apps and the notification window

    Hi, for the last two days my iPhone (an iPhone 4 with iOS 5) has been very slow to open apps, and very slow when I check the notification window; it takes too much time to open when I swipe down. Please help me resolve the issue.

    The Basic Troubleshooting Steps are:
    Restart... Reset... Restore...
    iPhone Reset
    http://support.apple.com/kb/ht1430
    Try this First... You will Not Lose Any Data...
    Turn the Phone Off...
    Press and Hold the Sleep/Wake Button and the Home Button at the Same Time...
    Wait for the Apple logo to Appear and then Disappear...
    Usually takes about 15 - 20 Seconds... ( But can take Longer...)
    Release the Buttons...
    Turn the Phone On...
    If that does not help... See Here:
    Backing up, Updating and Restoring
    http://support.apple.com/kb/HT1414
