System taking too much time in transaction J1I5

Hi,
While updating my RG1 register in transaction J1I5, the system is taking too much time (4 hours for 10 items). How can I solve this problem?
Please explain the procedure in detail.

Dear Lalit,
It may be related to a technical problem such as the network, so ask your BASIS people for help; they may be able to analyze it.
I hope this helps.
Regards,
Murali.

Similar Messages

  • SAP GUI taking too much time to open transactions

    Hi guys,
    I have done a complete system copy from the Production to the Quality server.
    After starting SAP on the Quality server, it is taking too much time to open SAP transactions (the system is still compiling the loads).
    I started SGEN, but it is giving TIME_OUT errors. Please help me with this issue.
    My hardware details on the Quality server:
    operating system : SuSE Linux 10 SP2
    Database : 10.2.0.2
    SAP : ECC 6.0  SR2
    RAM size : 8 GB
    Hard disk space : 500 GB
    swap space : 16 GB.
    Regards,
    Ramesh

    Hi,
    >I started SGEN, but it is giving TIME_OUT errors. Please help me with this issue.
    You are supposed to run SGEN as a batch job, so it should not be possible to get TIME_OUT errors: the dialog runtime limit (rdisp/max_wprun_time) does not apply to background jobs.
    I've seen a full SGEN last anywhere from 3 hours on high-end systems up to 8 full days on PC hardware...
    Regards,
    Olivier

  • Why it is taking too much time to kill the process?

    Hi All,
    Today one of my users ran a calc script and the process was taking too much time, so I killed the process through EAS. What I am wondering about is that even killing the process is taking too long; generally it takes no more than 2 seconds.
    After that I ran the MaxL statement
    alter system kill request 552599515;
    but it had no effect at all.
    Please reply if you have any solutions to kill this process.
    Thanks in advance.
    Ram.

    Hi Ram,
    1. First, how long do your calculation scripts normally run?
    2. While the script is running, you can go to the logs and monitor where exactly it is spending its time.
    3. Sometimes it does take time to cancel a transaction (the request might be in the middle of one).
    4. MaxL is always a good way to kill a request, as you did; it should be successful. Check what the logs say, and also check the sessions view, which might show "terminating", and finish it off there.
    5. If nothing works and, in the worst-case scenario, the request keeps hanging without doing anything, log off all the users and stop and restart the database.
    6. Do log off all the users first, so that you don't corrupt any filter-related security (.sec) file.
    Be very careful if it is production (and I assume you have recent backups).
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/
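    For reference, a minimal MaxL sequence for this situation might look like the one below. This is only a sketch assuming a reasonably recent Essbase release (the request ID is the one from this thread), so please check the exact syntax in the MaxL reference for your version:
        display session all;
        alter system kill request 552599515;
        alter system logout session all force;
    The first statement lists the active sessions (including any stuck in "terminating"), the second targets the single runaway request, and the last forcibly logs out every session if nothing else works.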

  • Report taking too much time in the portal

    Hi friends,
    we have developed a report on the ODS and published it on the portal.
    The problem is that when the users execute the report at the same time, it takes too much time, so the performance is very poor.
    Is there any way to sort this issue out? For example, can we send the report to the individual users' mail IDs so that they do not have to log in to the portal, or can we create the same report on a cube?
    What would be the main difference between a report built on a cube and one built on an ODS?
    Please help me.
    Thanks in advance,
    sridath

    Hi,
    Try this to improve the performance of the query.
    First, find the query run time. Where do you find it? See these notes:
    Note 557870 - FAQ: BW Query Performance
    Note 130696 - Performance trace in BW
    This info may be helpful.
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid restricted and calculated key figures (RKFs and CKFs).
    3. Avoid too many characteristics in the rows.
    You can find the query runtime with transactions ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu open the system load history and distribution for a particular day > check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table RSDDSTATS to get the statistics.
    Using the cache will decrease the loading time of the report.
    Run the reporting agent at night and send the results by e-mail. This ensures that the OLAP cache is filled, so later executions of the report will retrieve the result faster from the OLAP cache.
    Also try
    1. Use the different views in ST03 to see the two important figures: the aggregation ratio and the ratio of records selected from the database to records transferred to the front end.
    2. Use the program SAP_INFOCUBE_DESIGNS (performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list produced by this report, try running the RSRV checks on the cube and its aggregates.
    Go to SE38 > run the program SAP_INFOCUBE_DESIGNS.
    It shows the dimension table sizes as a percentage of the fact table size. If you take the speed of queries on a cube as the performance metric of the cube, measure the query runtime.
    3. To check the performance of the aggregates, look at the Valuation and Usage columns in the aggregate maintenance.
    Open the aggregates and observe the VALUATION and USAGE columns.
    The valuation is shown as a string of plus or minus signs and rates the aggregate's design and usage. The more plus signs, the better the valuation: the aggregate's compression is good, it is accessed often, and it satisfies many queries. The more minus signs, the worse the valuation: the compression ratio and the access frequency are poor.
    "-----" means the aggregate is pure overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    The Usage column tells you how often the aggregate has actually been used by queries.
    In this way you can check the performance of each aggregate.
    Refer to:
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    Performance issues related to aggregates:
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This tells you whether the query hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Query performance can also depend on the selection criteria; since you have given a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, the statistics need to be activated for ST03 and the BI Administration Cockpit to work.
    By implementing the BW Statistics Business Content (you need to install it and feed it with data), you get ready-made reports for this analysis.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can also go to transaction DB20, which gives you database performance information such as partitions, databases, schemas, buffer pools, tablespaces, etc.
    Use the report RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 202469 - Using the aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless through the tables RSDDSTATAGGRDEF*:
    Run the query in RSRT with "Execute + Debug" and statistics enabled; you will get a STATUID. Copy this and look it up in the table.
    This shows you exactly which InfoObjects the query hits; if any one of the objects is missing from the aggregate, the aggregate is useless for that query.
    6. Check table RSDDAGGRDIR in SE11. There you can find the last call-up of each aggregate.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • Simple APD is taking too much time in running

    Hi All,
    We have an APD on our development system which is taking too much time to run.
    This APD fetches data from a query that returns only 1,200 records and writes it directly into a master data attribute.
    The query runs fine in transaction RSRT and returns its output within 5 seconds, but in the APD even displaying the data over the query takes too much time.
    The APD takes around 1.20 hours to run.
    Thanks in advance!!

    Hi,
    When a query runs in an APD it normally takes much, much longer than it takes in RSRT. Run times such as the ones you describe (5 secs in RSRT and >1.5 hrs in the APD) are quite normal; I have seen some of my queries run for several hours in an APD as well.
    You just have to wait for it to complete.
    Regards,
    Suhas

  • Sites Taking too much time to open and shows error

    hi,
    I have set up a SharePoint 2013 environment correctly and created a site collection. Everything was working fine, but suddenly, when I try to open that site collection or the Central Administration site, it takes too much time to open a page, and most of the time it does not open any page at all and shows the following error.
    I even went to the LOGS folder under the 15 hive, but found nothing useful there. Please tell me why it takes about 10-12 minutes to open a site or any page and then shows the error above.

    This usually happens if you are low on hardware. Check whether your machine meets the required software and hardware requirements.
    https://technet.microsoft.com/en-us/library/cc262485.aspx
    http://sharepoint.stackexchange.com/questions/58370/minimum-real-world-system-requirements-for-sharepoint-2013
    Please remember to up-vote or mark the reply as answer if you find it helpful.
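    Since the LOGS folder under the 15 hive showed nothing useful by eye, it may be easier to query the ULS logs with PowerShell; here is a sketch using the standard SharePoint cmdlets (the 30-minute window and the severity filter are my assumptions, adjust as needed):
        # run in the SharePoint 2013 Management Shell on the affected server
        Get-SPLogEvent -StartTime (Get-Date).AddMinutes(-30) |
            Where-Object { $_.Level -eq "Critical" -or $_.Level -eq "High" } |
            Select-Object Timestamp, Area, Category, Message |
            Format-Table -AutoSize
    A first page load of 10-12 minutes is also a classic symptom of application pools recycling and recompiling, or of servers without internet access stalling on certificate revocation checks, so the timestamps of the critical entries should show where the time goes.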

  • Taking too much time to load application

    Hi,
    I have deployed a J2EE application on Oracle Application Server 10g version 10.1.2.0.2, but the application takes too much time to load. Once loaded, everything works fast.
    I have another 10g server (same version) on which the same application loads very fast.
    When I checked the Apache error logs I found this:
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:36:36 2007] [notice] FastCGI: process manager initialized (pid 21177)
    [Thu Apr 26 11:36:37 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:36:37 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:36:37 2007] [warn] long lost child came home! (pid 9124)
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0015: recv() returns 0. There has no message available to be received and oc4j has gracefully (orderly) closed the connection.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0184: Failed to find an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0119: Failed to get an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:46:33 2007] [notice] FastCGI: process manager initialized (pid 21726)
    [Thu Apr 26 11:46:34 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:46:34 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:46:34 2007] [warn] long lost child came home! (pid 21182)
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    Please HELP ME...

    Hi, this is the solution given by your link:
    A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
    Problem
    To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
    In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of the stateful firewall.
    However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
    Solution
    The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
    Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
    Oc4jUserKeepalive on
    Oc4jConnTimeout 12000 (or a similar value)
    Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
    ajp.keepalive=true
    For example:
    java -Dajp.keepalive=true -jar oc4j.jar
    Please tell me where, or in which file, I should put the option
    java -Dajp.keepalive=true -jar oc4j.jar
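    The thread leaves this open, so for what it's worth: in a managed Oracle Application Server 10g installation, OC4J is started by OPMN rather than by hand, and the usual place for such a JVM option is the java-options entry of the OC4J instance (assumed here to be named "home", as in the logs above) in opmn.xml, roughly:
        <process-type id="home" module-id="OC4J">
          <module-data>
            <category id="start-parameters">
              <data id="java-options" value="-Dajp.keepalive=true"/>
            </category>
          </module-data>
        </process-type>
    After editing opmn.xml, restart the instance (e.g. opmnctl restartproc process-type=home). Running java -Dajp.keepalive=true -jar oc4j.jar directly only applies when you start OC4J standalone from the command line.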

  • Taking too much time incollecting in business content activation

    Hi all,
    I am collecting Business Content objects for activation. I selected the object 0FIAA_CHA with the grouping 'data flow before', but collecting it for activation takes too much time; the system then asks for a source system authorization and throws the error 'maximum run time exceeded'.
    What can be the reason for this?
    Please help..

    Hi,
    You should always have the latest BI Content patch installed, but I don't think that is the problem here. It seems that there are a lot of objects to collect. Under 'Grouping' you can select the option 'Only necessary objects'; please check whether you can use this option to install only the objects that you need from the content.
    Best Regards,
    Des.

  • Taking too much time using BufferedWriter to write to a file

    Hi,
    I'm using the method extractItems(), given below, to write data to a file. This method takes too much time to execute when the number of records in the enumeration is 10,000 or more; to be precise, it takes around 70 minutes. The writing pauses intermittently for 20 seconds after writing a few lines, and sometimes for much longer. Has anybody faced this problem before, and if so, what could be the cause? This is very high-priority work, and it would be really helpful if someone could give me some info on this.
    Thanks in advance.
    public String extractItems() throws InternalServerException {
        try {
            String extractFileName = getExtractFileName();
            FileWriter fileWriter = new FileWriter(extractFileName);
            BufferedWriter bufferWrt = new BufferedWriter(fileWriter);
            PrintWriter out = new PrintWriter(bufferWrt);

            CXBusinessClassIfc editClass = new ExploreClassImpl(className, mdlMgr);
            CXPropertyInfoIfc[] propInfo = editClass.getClassPropertyInfo(configName);
            TemplateHeaderInfo.printHeaderInfo(propInfo, out, mdlMgr);

            XDItemSet itemSet = getItemsForObjectIds(catalogEditDO.getSelectedItems());
            Enumeration allitems = itemSet.allItems();
            System.out.println("the batch size : " + itemSet.getBatchSize());
            XDForm frm = itemSet.getXDForm();
            XDFormProperty[] props = frm.getXDFormProperties();
            bufferWrt.newLine();

            long startTime = System.currentTimeMillis();
            System.out.println("time before the while loop : " + startTime);
            while (allitems.hasMoreElements()) {
                XDItem item = (XDItem) allitems.nextElement();
                String aRow = "";
                for (int i = 0; i < props.length; i++) {
                    // the forum stripped the array index here; it has to be props[i]
                    String value = item.getStringValue(props[i]);
                    if (value == null || value.equalsIgnoreCase("null"))
                        value = "";
                    if (i == 0)
                        aRow = value;
                    else
                        aRow += ("\t" + value);
                }
                bufferWrt.write(aRow.toCharArray());
                bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
                bufferWrt.newLine();
            }
            System.out.println("time after the while loop : " + System.currentTimeMillis());

            // close the outermost writer only; it closes the wrapped writers as well
            out.close();
            return extractFileName;
        } catch (Exception e) {
            e.printStackTrace();
            throw new InternalServerException(e.getMessage());
        }
    }
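    Not from the original thread, but two things in this loop work against the BufferedWriter and are worth sketching as a fix: the flush() after every row forces a write to disk per line, and building each row with String += copies the row repeatedly. A minimal rewrite of the hot loop, reusing the names bufferWrt, allitems and props from the code above:
        StringBuilder row = new StringBuilder();
        while (allitems.hasMoreElements()) {
            XDItem item = (XDItem) allitems.nextElement();
            row.setLength(0);                    // reuse one builder for every row
            for (int i = 0; i < props.length; i++) {
                String value = item.getStringValue(props[i]);
                if (value == null || value.equalsIgnoreCase("null"))
                    value = "";
                if (i > 0)
                    row.append('\t');
                row.append(value);
            }
            bufferWrt.write(row.toString());
            bufferWrt.newLine();                 // no flush() here
        }
        bufferWrt.flush();                       // flush once, after the loop
    If the 20-second stalls remain even without the per-row flush, the time is almost certainly spent inside allitems.nextElement() fetching the next batch from the backend (note the batch size printed above), not in the file I/O.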

    Hi fiontan,
    Thanks a lot for the response!!!
    Yeah, I know it's a lot of code, but I thought it would be more informative if the whole function was quoted.
    I'm in fact using the PrintWriter to wrap the BufferedWriter, but I am not using the print() method.
    Does using the print() method save any time?
    The place where the delay occurs is the while loop shown below:
    while (allitems.hasMoreElements()) {
        String aRow = "";
        XDItem item = (XDItem) allitems.nextElement();
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]); // index restored; the forum stripped the brackets
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i == 0)
                aRow = value;
            else
                aRow += ("\t" + value);
        }
        startTime1 = System.currentTimeMillis();
        System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
        bufferWrt.write(aRow.toCharArray());
        out.flush();       // added by rosmon to check extra time taken for extraction
        bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.newLine();
        startTime2 = System.currentTimeMillis();
        System.out.println("time here is--after-writing to buffer : " + startTime2);
    }
    What exactly happens is that after a few loops it just seems to sleep for around 20 seconds and then starts off again, and it goes on like this until all the records are done.
    Please do let me know if you have any idea why this is happening! This bug is giving me a scare.
    Thanks in advance.

  • Load is taking too much time & going to short dump.

    Hi,
    We have around 1,000 records to be loaded from R/3. The load is taking too much time and is still running.
    When we tried to update the records manually, we got the error 'converting the rate into indirect quote: CAD/USD (exchange rate type EURX)'. Has anyone faced this type of error?
    Thanks,
    APR

    Hi,
    what are you trying to do? If you want to load exchange rates, this can be done via the context menu of the appropriate source system (Transfer Global Settings / Exchange Rates).

  • While condition is taking too much time

    I have a query that returns around 2,100 records (not many!), but when I process the result set with a while loop it takes too much time (around 30 seconds). Here is the code:
    public static GroupHierEntity load(Connection con) throws SQLException {
        internalCustomer = false;
        String customerNameOfLogger = com.photomask.framework.ControlServlet.CUSTOMER_NAME;
        // note: the null check has to come first, otherwise startsWith() throws a NullPointerException
        if (customerNameOfLogger == null || customerNameOfLogger.equals("")
                || customerNameOfLogger.equals("Unavailable")
                || customerNameOfLogger.startsWith("DPI")
                || customerNameOfLogger.startsWith("DUPONT")) {
            internalCustomer = true;
        }
        // show all groups to internal customers and only their own customer groups to external customers
        if (internalCustomer) {
            stmtLoad = con.prepareStatement(sqlLoad);
            ResultSet rs = stmtLoad.executeQuery();
            return new GroupHierEntity(rs);
        } else {
            stmtLoadExternal = con.prepareStatement(sqlLoadExternal);
            stmtLoadExternal.setString(1, customerNameOfLogger);
            stmtLoadExternal.setString(2, customerNameOfLogger);
            ResultSet rs = stmtLoadExternal.executeQuery();
            return new GroupHierEntity(rs);
        }
    }
    GroupHierEntity ge = GroupHierEntity.load(con);
    while (ge.next()) {
        lvl = ge.getInt("lvl");
        oid = ge.getLong("oid");
        name = ge.getString("name");
        if (internalCustomer && lvl == 2) {
            int i = getAlphaIndex(name);
            super.setAppendRoot(alphaIndex);
        }
        gn = new GroupListDataNode(lvl + 1, oid, name);
        gn.setSelectable(true);
        this.addNode(gn);
        count++;
    }
    System.out.println("*** count " + count);
    ge.close();
    ========================
    Then I removed everything in the while body and ran it again with just a counter; it still takes the same time (30 secs):
    while (ge.next()) { count++; }
    Why does the while condition (ge.next()) take so much time? Is there any other, more efficient way of reading the result set?
    Thanks ,
    bala

    I tried all these things. The query itself is not taking much time (1 sec), but resultset.next() is taking too much time. I counted the time by putting System.out.println() calls at various points to see which part takes how long.
    executeQuery() takes only 1 sec; processing the result set (moving the cursor to the next position) is what takes too much time.
    I have similar queries that return some 800 rows, and they take only 1 sec.
    So I suspect resultset.next(). Is there any alternative?
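    One hedged suggestion, since the thread does not say which JDBC driver is in use: many drivers (Oracle's in particular) fetch only a small number of rows per network round trip by default (10 for Oracle), so ResultSet.next() blocks on the database every few rows; for ~2,100 rows that is over 200 round trips. Raising the fetch size on the statements from the load() method above is a one-line change per statement:
        // before executeQuery(): ask the driver to fetch rows in larger batches
        stmtLoad.setFetchSize(500);          // trades a little memory for far fewer round trips
        ResultSet rs = stmtLoad.executeQuery();
        // rs.next() now hits the network only about once every 500 rows
    The same call would go on stmtLoadExternal. Statement.setFetchSize() is standard JDBC, but the default and the effect vary by driver, so measure before and after.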

  • ODS to CUBE load taking too much time..

    Hi all ,
    we are loading data from our ZODS into ZCUBE, but the data load is taking too much time. We haven't created any indexes; we also tried creating an InfoSource for the ODS, but still the same problem. The monitor always shows 0 of 345,674 records, i.e. the records are not getting extracted from the ODS at all.
    Can anybody help me in this regard? It is a bit urgent.
    Thanks in advance.

    Hi,
    there are a few things you can check. First, check with ST22 that this job hasn't ended in a dump.
    The next thing you can do, if the job doesn't end abnormally, is to reduce the number of records processed at the same time. Sometimes the system has trouble if the data package it has to process is too large. Go to the InfoPackage -> DataS. Default Data Transfer -> set the maximum to 10% of the default value. Try to run the load again.
    If the job still doesn't finish, check whether there are any ABAP routines and/or formulas involved in the update rules; maybe they are running in an endless loop.
    regards,
    Raymond Baggen
    Uphantis bv

  • Importing is taking too much time (2 DAYS)

    Dear All,
    I'm importing the support packages below together in one queue on SAP Solution Manager 4:
    SAPKB70015             Basis Support Package 15 for 7.00
    SAPKA70015             ABA Support Package 15 for 7.00
    SAPKITL426             ST 400: Patch 0016, CRT for SAPKB70015
    SAPKIBIIP6             BI_CONT 703: patch 0006
    SAPKIBIIP7             BI_CONT 703: patch 0007
    SAPKIBIIP8             BI_CONT 703: patch 0008
    SAPK-40010INCPRXRPM    CPRXRPM 400: patch 0010
    SAPK-40011INCPRXRPM    CPRXRPM 400: patch 0011
    SAPK-40012INCPRXRPM    CPRXRPM 400: patch 0012
    SAPKIPYJ7E             PI_BASIS 2005_1_700: patch 0014
    SAPKW70016             BW Support Package 16 for 7.00
    The import is taking too much time (2 days) in the main import phase. I have looked at the SLOG; there are many rows saying 'I am waiting 1 sec' and '6 sec'. I have also checked transaction STMS: all support packages were imported except one, SAPKW70016.
    Please advise.
    Best Regards'
    HE

    Hello Mohan,
    The DBTABLOG table does get large; the best option is to switch off logging. If that is not possible, increase the frequency of your delete job. Also explore one more alternative: have a look at the archiving object BC_DBLOGS, with which you could archive old records (in accordance with your customer's data retention policies) to reduce the size of the table.
    Also have a look at the following notes; they advise on how to improve the performance of your delete job:
    Note 531923 - Audit Trail: Indexes on table DBTABLOG
    Note 579980 - Table logs: Performance during access to DBTABLOG
    Regards,
    Siddhesh

  • Archive Delete job taking too much time - STXH Sequential Read

    Hello,
    We have been running Archive sessions in our production system in last couple of months. We use SARA and selecting the appropriate variants for WRITE, DELETE and STORAGE options.
    Currently we use the archiving object FI_DOCUMNT. The write job finishes in the normal time (5 hrs, based on the selection criteria). After that, the delete job always used to complete within 1 hr, or at most 2 hrs (over the last 3 months).
    But in the last few days the delete job has been taking far too long to complete (around 8-10 hrs). When I monitored the system I found that a sequential read on table STXH is taking too much time, and this seems to be the cause.
    Could you please provide a solution so that the job runs as fast as it did before?
    Thanks for your time
    Shyl

    Hi Juan,
    After the statistics run, the performance is quite good. Now the job finishes as expected.
    Thanks. Problem solved.
    Shyl

  • Delta Sync taking too much time on refreshing of tables

    Hi,
    I am working on Smart Service Manager 3.0. I have come across a scenario where the delta sync takes too much time.
    The requirement is that when we update the stock quantity, the stock should be updated instantaneously.
    To achieve this we have to refresh 4 stock tables on every sync so that the updated quantity is reflected on the device.
    This takes a lot of time (3 to 4 minutes), which is highly unacceptable from the user's perspective.
    Could anyone please suggest a way to refresh only those tables on which an action was actually carried out?
    For example, the CTStock table should be refreshed only if I transfer stock, and not in any other scenario, such as changing a status from accept to driving or anything else unrelated to stocks.
    Thanks,
    Star
    Tags edited by: Michael Appleby

