Transaction EA60 is slow

Hi,
Transaction EA60 (spooling of bills) is taking much longer than before. It is selecting about 100,000 records from table ERDK. The indexed fields are as follows:
1. Print date [DRUCKDAT]
2. Contract account [VKONT]
3. Reconciliation key for General Ledger
4. Reason for creating print document [ERGRD]
5. Line items are archived [ITEMS_ARCHIVED], Number of print document [OPBEL]
6. Posting date in the document [BUDAT]
7. Business partner number [PARTNER]
8. Number of the substitute FI-CA document [ABWBL]
If there is an issue with an index, should I create a new one? If yes, on which field(s)?

Hi,
I guess this is a standard transaction. Clean up all the older spool requests, and also archive old data from the tables.

Similar Messages

  • Should DML transaction be so slow?

    We are running Oracle 10.2.0.1.0 on powerful enough Xeon machines (1-4 Xeons, 8-16 GB RAM) and 64-bit Windows 2003. The application is strictly data warehousing, which means no primary/foreign keys on fact tables and no small transactions. Fact tables contain 10-20 million rows.
    Now the problem: if I delete 1-2 million rows from a 20-million-row table, it takes about an hour. Should it take so long? Updates and inserts are not as slow; inserting 100K rows takes 1 minute. Isn't that too slow? Maybe there are some settings we can adjust to optimize the DB for data warehousing rather than OLTP?

    <Should DML transaction be so slow? >
    Probably not, but it's hard to say. Deleting 2M rows in an hour sounds slow. You need to get execution plans for your deletes to see what is happening.
    It sounds like you're deleting across partitions in the same statement. You could try deleting rows for each partition with individual statements to see if that's faster.
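    As a sketch of that per-partition approach (the table name, partition names, and predicate below are made up for illustration), one can generate one DELETE per partition using Oracle's PARTITION extension to the DELETE statement, so each statement touches only a single segment:

```java
// Hypothetical helper: build one DELETE per partition instead of a single
// DELETE that spans all partitions of the table.
import java.util.ArrayList;
import java.util.List;

public class PartitionDeletes {
    // Returns one "DELETE ... PARTITION (p) ..." statement per partition name.
    static List<String> buildDeletes(String table, List<String> partitions, String predicate) {
        List<String> stmts = new ArrayList<>();
        for (String p : partitions) {
            stmts.add("DELETE FROM " + table + " PARTITION (" + p + ") WHERE " + predicate);
        }
        return stmts;
    }

    public static void main(String[] args) {
        // FACTS, P2008Q1/P2008Q2, and the LOAD_DATE predicate are invented examples.
        for (String s : buildDeletes("FACTS",
                                     List.of("P2008Q1", "P2008Q2"),
                                     "LOAD_DATE < DATE '2008-01-01'")) {
            System.out.println(s);
        }
    }
}
```

    Running each statement separately also gives you a natural point to commit and to compare execution plans partition by partition.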

  • Flashback and transaction query very slow

    Hello. I was wondering if anyone else has seen flashback transaction queries be really slow, and whether there is anything I can do to speed them up? Here is my situation:
    I have a database with about 50 tables. We need to allow the user to go back to a point in time and "undo" what they have done. I can't use flashback table because multiple users can be making changes to the same table (different records) and I can't undo what the other users have done. So I must use the finer granularity of undoing each transaction.
    I have not had a problem with the queries themselves. I basically get a cursor over all the transactions in each of the tables and order them backwards (since all the business rules must be observed); from that cursor, I can execute the undo_sql. However, getting this cursor takes forever. In fact, I once had a cursor that did a "union all" over every table, and even if the user had only modified one table, it took way too long. So now I first do a quick count based on ROWSCN (running 10g, and the tables have ROWDEPENDENCIES) falling in the relevant time window, to find out whether a table has been touched, and build a cursor only for the tables that were touched. This helps, but it is still slow, especially compared to any other query I have. And if the user did touch a lot of tables, it is still way too slow.
    Here is an example of part of a query that is used on each table:
    select xid, commit_scn, logon_user, undo_change#, operation, table_name, undo_sql
    from flashback_transaction_query
    where operation IN ('INSERT', 'UPDATE', 'DELETE')
      and xid IN (select versions_xid
                  from TABLE1
                  versions between SCN p_scn and current_scn
                  where system_id = p_system_id)
      and table_name = UPPER('TABLE1')
    Any help is greatly appreciated.
    -Carmine

    Anyone?
    Thanks,
    -Carmine

  • Transactional replication very slow with indexes on Subscriber table

    I have set up transactional replication for one of our databases, where one table with about 5 million records is replicated to a Subscriber database. With every replication about 500,000-600,000 changed records are sent to the Subscriber.
    For the past month I have seen very strange behaviour when I add about 10 indexes to the Subscriber table. As soon as I add the indexes, replication becomes extremely slow (almost 3 hours for 600k records). As soon as I remove the indexes, replication is again very fast: about 3 minutes for the same number of records.
    I've searched a lot on the internet but can't find any explanation for this strange behaviour after adding the indexes. As far as I know it shouldn't be a problem to add indexes to a Subscriber table, and it hasn't been on another replication configuration we use.
    Some information from the Replication Log:
    With indexes on the Subscriber table
    Total Run Time (ms) : 9589938 Total Work Time : 9586782
    Total Num Trans : 3 Num Trans/Sec : 0.00
    Total Num Cmds : 616245 Num Cmds/Sec : 64.28
    Total Idle Time : 0 
    Writer Thread Stats
    Total Number of Retries : 0 
    Time Spent on Exec : 9580752 
    Time Spent on Commits (ms): 2687 Commits/Sec : 0.00
    Time to Apply Cmds (ms) : 9586782 Cmds/Sec : 64.28
    Time Cmd Queue Empty (ms) : 5499 Empty Q Waits > 10ms: 172
    Total Time Request Blk(ms): 5499 
    P2P Work Time (ms) : 0 P2P Cmds Skipped : 0
    Reader Thread Stats
    Calls to Retrieve Cmds : 2 
    Time to Retrieve Cmds (ms): 10378 Cmds/Sec : 59379.94
    Time Cmd Queue Full (ms) : 9577919 Full Q Waits > 10ms : 6072
    Without indexes on the Subscriber table
    Total Run Time (ms) : 89282 Total Work Time : 88891
    Total Num Trans : 3 Num Trans/Sec : 0.03
    Total Num Cmds : 437324 Num Cmds/Sec : 4919.78
    Total Idle Time : 0 
    Writer Thread Stats
    Total Number of Retries : 0 
    Time Spent on Exec : 86298 
    Time Spent on Commits (ms): 282 Commits/Sec : 0.03
    Time to Apply Cmds (ms) : 88891 Cmds/Sec : 4919.78
    Time Cmd Queue Empty (ms) : 1827 Empty Q Waits > 10ms: 113
    Total Time Request Blk(ms): 1827 
    P2P Work Time (ms) : 0 P2P Cmds Skipped : 0
    Reader Thread Stats
    Calls to Retrieve Cmds : 2 
    Time to Retrieve Cmds (ms): 2812 Cmds/Sec : 155520.63
    Time Cmd Queue Full (ms) : 86032 Full Q Waits > 10ms : 4026
    Can someone please help me with this issue? Any ideas? 
    Pim 

    Hi Megens:
    An insert statement might be slow because of not only indexes but a few other things too:
    0) SQL DB blocking during inserts
    1) Whether any insert triggers exist
    2) Constraints, if any
    3) Index fragmentation
    4) Page splits / fill factor
    Without indexes, inserts are fast. With indexes, each time a new row is inserted, SQL Server will:
    1) check for room on the page; if there is no room, a page split happens so the record can be placed in the right spot
    2) once the record is written, all the indexes must be updated
    3) all this extra update work can make the insert statement slow
    It's better to run index maintenance jobs frequently to avoid fragmentation.
    If everything is clear on the SQL Server side, you need to look at disk I/O, network latency between the servers, and so on.
    Thanks, Satish Kumar. Please mark this post as answered if my answer helps you resolve your issue :)
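    As a rough in-memory analogy of the point above (this models only the extra index maintenance writes, not page splits or any actual SQL Server internals), each additional index is one more ordered structure that must be written on every insert:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class IndexWriteCost {
    // Inserts `rows` keys while maintaining `indexCount` extra ordered maps,
    // returning the total number of individual writes performed.
    static long insertWithIndexes(int rows, int indexCount) {
        List<TreeMap<Integer, Integer>> indexes = new ArrayList<>();
        for (int i = 0; i < indexCount; i++) {
            indexes.add(new TreeMap<>());
        }
        long writes = 0;
        for (int row = 0; row < rows; row++) {
            writes++; // the base-table write
            for (TreeMap<Integer, Integer> idx : indexes) {
                idx.put(row, row); // one extra write per index
                writes++;
            }
        }
        return writes;
    }

    public static void main(String[] args) {
        // With 10 indexes, every row costs 11 writes instead of 1, which is
        // consistent in direction with the slowdown the poster measured.
        System.out.println(insertWithIndexes(600_000, 10));
    }
}
```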

  • SM58 Transaction Recorded Very Slow

    Dear Experts,
    I have many tRFCs in the PI system in SM58 with status "transaction recorded". This seems to be because there was an unusual amount of data (file to IDoc) from one interface that suddenly sent more data than usual. This happened at midnight, and since then we have had a queue in SM58 with many recorded transactions (until tonight).
    The strange thing is that when I try to execute the LUW (F6), the transaction is processed successfully, even though sometimes after pressing F6 the processing can take a while (marked by the loading cursor). When the processing time is long, I just stop the transaction and re-open SM58; the transaction I executed is then in "executing" status, and when I execute the LUW again it never takes long.
    Trying to execute LUWs with the "recorded" status ticked ends up executing no transactions.
    Checking SMQS for the destination, the actual connections rarely reach the maximum. The qRFC resource status in SMQS shows OK.
    Going to SM51, then Server Name > Information > Queue Information, there are no waiting requests.
    The transactions do get processed; it's just that they are processed very slowly, and this impacts the business.
    What can I do when this happens? How can I really re-process those recorded transactions?
    Could this be because of receiver resources? How can I check this?

    Dear Experts,
    According to this link,
    http://wiki.scn.sap.com/wiki/pages/viewpage.action?original_fqdn=wiki.sdn.sap.com&pageId=145719978
    "Transaction recorded" usually happens when A. processing idocs or B. BW loads.
    A.If it occurs when processing idocs you will see function module "IDOC_INBOUND_ASYNCH" mentioned in SM58.
    Check also that the idocs are being processed in the background and not in the foreground.
    Ensure that background processing is used for ALE communications.
    Report RSEOUT00 (outbound) can be configured to run very specifically for the high-volume message types on the system. Schedule regular runs of RSEOUT00; it can be run per IDoc type and/or partner, etc.
    To set to background processing for Outbound idocs do the following:
    -> go to transaction WE20 -> Select Partner Select Outbound Message Type and change the processing method from
    "Transfer IDoc Immedi." to "Collect IDocs".
    Reading that explanation, should the setting of IDoc processing to background (Collect IDocs) be done in PI or in the receiver?
    If the IDocs are processed collectively, will it make sending IDocs from PI to the receiver system faster? What is the explanation for collective processing making IDoc sending faster?
    Why should we use report RSEOUT00 when we already have SM58, where we can execute the LUWs?
    Thank you,
    Suwandi C.

  • PutAll() with Transactions is slow

    Hi Guys,
    I am looking into using transactions, but they seem to be really slow - perhaps I am doing something silly. I've written a simple test to measure the time taken by putAll() with and without transactions and putAll() with transactions takes nearly 4-5 times longer than without transactions.
    Results:
    Average time taken to insert without transactions 210ms
    Average time taken to insert WITH transactions 1210ms
    Test code:
    public class TestCoherenceTransactions {
        private static final int MAP_SIZE = 5000;
        private static final int LIST_SIZE = 15;
        private static final int NO_OF_ITERATIONS = 50;

        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("dist-cache");
            Map dataToInsert = new HashMap();
            for (int i = 0; i < MAP_SIZE; i++) {
                List value = new ArrayList();
                for (int j = 0; j < LIST_SIZE; j++) {
                    value.add(j + "QWERTYU");
                }
                dataToInsert.put(i, value);
            }

            // Baseline: putAll() without transactions.
            long timeTaken = 0;
            long startTime = 0;
            for (int i = 0; i < NO_OF_ITERATIONS; i++) {
                cache.clear();
                startTime = System.currentTimeMillis();
                cache.putAll(dataToInsert);
                timeTaken += System.currentTimeMillis() - startTime;
            }
            System.out.println("Average time taken to insert without transactions " + timeTaken / NO_OF_ITERATIONS);

            // Same insert, but inside a pessimistic local transaction.
            timeTaken = 0;
            for (int i = 0; i < NO_OF_ITERATIONS; i++) {
                cache.clear();
                startTime = System.currentTimeMillis();
                TransactionMap mapTx = CacheFactory.getLocalTransaction(cache);
                mapTx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                mapTx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
                Collection txnCollection = Collections.singleton(mapTx);
                mapTx.begin();
                mapTx.putAll(dataToInsert);
                CacheFactory.commitTransactionCollection(txnCollection, 1);
                timeTaken += System.currentTimeMillis() - startTime;
            }
            System.out.println("Average time taken to insert WITH transactions " + timeTaken / NO_OF_ITERATIONS);
            System.out.println("cache size " + cache.size());
        }
    }
    Am I missing something obvious? I can't understand why transactions are this slow. Any pointers would be very much appreciated.
    Thanks
    K

    Hi,
    TransactionMap is this slow because, with CONCUR_PESSIMISTIC or CONCUR_OPTIMISTIC, it locks and unlocks every entry you modify during the lifetime of the transaction, one by one, as there is no lock-many functionality (and even every entry you read, if you use at least the REPEATABLE_GET isolation level). Since each lock and unlock operation needs a network call, and that call must also be backed up from the primary node to the backup node in a distributed cache, the more entries you need to lock and unlock, the more latency this introduces.
    Best regards,
    Robert
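    Robert's point can be put into a back-of-the-envelope model; the round-trip figure below is an assumption for illustration, not a measurement from the poster's cluster:

```java
public class LockCostModel {
    // Estimated locking overhead: one lock plus one unlock network call per
    // entry, each costing roughly one network round trip.
    static double lockingOverheadMs(int entries, double roundTripMs) {
        return 2.0 * entries * roundTripMs;
    }

    public static void main(String[] args) {
        // 5000 entries at an assumed 0.1 ms round trip gives roughly 1000 ms
        // of pure lock/unlock traffic, the same order as the ~1 s gap
        // between the two averages measured above.
        System.out.println(lockingOverheadMs(5000, 0.1));
    }
}
```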

  • Custom Transaction running slow on R/3 4.7 ,Oracle 10.2.0.4

    Hi,
    We have a custom transaction for checking the roles related to a transaction.
    We have the same copy of the program in DEE, QAS and PRD (obviously).
    The catch is that in DEE the transaction runs very slowly, whereas in QAS and PRD it is normal.
    In debug mode I found that in DEE, accessing AGR_1016 takes a long time.
    I checked the indexes, tables and extents, and noticed that in DEE the extents and the physical table size are larger than in PRD,
    whereas there are more records in PRD and fewer in DEE.
    Please guide me on what checks I can perform to find out where exactly the problem is.
    Regards,
    Siddhartha.

    Hi,
    Thanks for the reply.
    The details are given below.
    Please let me know if you need any more information.
    /* THE ACCESS PATH */
    SELECT STATEMENT ( Estimated Costs = 2 , Estimated #Rows = 1 )                                                                               
    5  7 FILTER                                                            
      Filter Predicates                                                                               
    5  6 NESTED LOOPS                                                  
    ( Estim. Costs = 1 , Estim. #Rows=1 )                       
      Estim. CPU-Costs = 14,882 Estim. IO-Costs = 1                                                                               
    5  4 NESTED LOOPS                                              
    ( Estim. Costs = 1 , Estim. #Rows = 1 )                   
      Estim. CPU-Costs = 14,660 Estim. IO-Costs = 1                                                                               
    1 INDEX RANGE SCAN UST12~0                              
    ( Estim. Costs = 1 , Estim. #Rows = 1 )               
    Search Columns: 4                                     
    Estim. CPU-Costs = 9,728 Estim. IO-Costs = 1          
    Access Predicates Filter Predicates                   
    5  3 TABLE ACCESS BY INDEX ROWID AGR_1016                  
    ( Estim. Costs = 1 , Estim. #Rows = 4 )               
    Estim. CPU-Costs = 4,932 Estim. IO-Costs = 1                                                                               
    2 INDEX RANGE SCAN AGR_1016^0                       
    Search Columns: 2                                 
    Estim. CPU-Costs = 3,466 Estim. IO-Costs = 0      
    Access Predicates Filter Predicates                                                                               
    5 INDEX UNIQUE SCAN UST10S~0                                
    Search Columns: 5                                         
    Estim. CPU-Costs = 221 Estim. IO-Costs = 0                
    Access Predicates Filter Predicates                                                                               
    The details of the index UST12~0   
    UNIQUE     Index   UST12~0                      
    Column Name                     #Distinct       
    MANDT                                   7
    OBJCT                                    1,339
    AUTH                                       6,274
    AKTPS                                    2
    FIELD                                      762
    VON                                         6,112
    BIS                                           246
    Last statistics date                           02.08.2008
    Analyze Method                                Sample 80,739 Rows
    Levels of B-Tree                               2
    Number of leaf blocks                      9,588
    Number of distinct keys                   722,803
    Average leaf blocks per key           1
    Average data blocks per key          1
    Clustering factor                                42,765
    Table   AGR_1016                                
    Last statistics date                  24.09.2008
    Analyze Method                       Sample 5,102 Rows
    Number of rows                        5,102
    Number of blocks allocated    51
    Number of empty blocks         0
    Average space                        2,612
    Chain count                              0
    Average row length                 48
    Partitioned                               NO
    NONUNIQUE  Index   AGR_1016~001                 
    Column Name                     #Distinct       
    MANDT                                          4
    PROFILE                                    4,848
    COUNTER                                  10
    Last statistics date                  24.09.2008
    Analyze Method                 Sample 5,102 Rows
    Levels of B-Tree                               1
    Number of leaf blocks                      24
    Number of distinct keys                   5,101
    Average leaf blocks per key             1
    Average data blocks per key            1
    Clustering factor                              1,449
    UNIQUE     Index   AGR_1016^0                   
    Column Name                     #Distinct       
    MANDT                                          4
    AGR_NAME                                   4,725
    COUNTER                                       10
    Last statistics date                  24.09.2008
    Analyze Method                 Sample 5,102 Rows
    Levels of B-Tree                               1
    Number of leaf blocks                         30
    Number of distinct keys                    5,102
    Average leaf blocks per key                    1
    Average data blocks per key                    1
    Clustering factor                          1,410
    If any more info is required..Please tell me.
    Siddhartha

  • How to identify reasons for a very slow transaction?

    Hi All,
    If I see that a SAP standard transaction is very slow for every user, what are the areas that I will have to look to get it resolved.
    All other transactions work quick and only one transaction is very sluggish. How to find the root cause and resolve this?
    Thanks
    Vijay

    Hi,
    I have used transaction code SGEN to reduce the time needed to open transactions. Opening a transaction for the first time requires the compilation of some programs, reports, etc., which takes a lot of time.
    This transaction will load compiled versions of all the frequently used transactions into the database. The database size will increase by 2 GB or more.
    Refer this link for more details :
    http://help.sap.com/saphelp_nw70/helpdata/EN/28/52583c65399965e10000000a114084/frameset.htm
    I hope this will help you.
    Best Regards,
    Pradeep Bishnoi
    Edited by: Pradeep Bishnoi on Dec 31, 2008 1:32 PM

  • Transaction executing slow and database undo log increasing soon.

    Dears,
    I developed a transaction which queries the database many times in a repeater loop and finally generates a SAP MII XML output document, which I want to display via an HTML hyperlink in MII Navigation (using XacuteQuery and iGrid).
    I found that:
    1. If I execute the transaction in the SAP MII Workbench, it runs very slowly, and the database undo log in D:\oracle\TMI\sapdata2\undo_1\UNDO.DATA1 grows quickly.
    2. If I use the MII Schedule Editor to run the transaction, it executes quickly.
    Does anyone know why?
    Is there any setting that can make it execute fast in the MII Workbench?
    Many thanks!
    Ivan

    Hi,
    Can you explain why it behaves differently in the MII Workbench versus the Scheduler, depending on the SQL joins and logic?
    My transaction logic is basically as below:
    1. query qualified sfc in SAPME tables
    SELECT *
      FROM (SELECT   s.site, s.sfc, ss.operation_bo, ss.qty_in_queue,
                     ss.qty_in_work, s.priority, s.item_bo, s.shop_order_bo,
                     s.status_bo, ss.sfc_router_bo, ss.step_id, ss.step_sequence,
                     st.status_description, cf.ATTRIBUTE, cf.VALUE
                FROM sfc_step ss,
                     sfc s,
                     sfc_router sr,
                     sfc_routing srt,
                     status st,
                     custom_fields cf
               WHERE sr.handle = ss.sfc_router_bo
                 AND srt.handle = sr.sfc_routing_bo
                 AND s.handle = srt.sfc_bo
                 AND st.handle = s.status_bo
                 AND SUBSTR (s.status_bo, -3) IN ('402', '403', '404')
                 AND sr.handle = ss.sfc_router_bo
                 AND sr.in_use = 'true'
                 AND ((ss.qty_in_queue > 0) OR (ss.qty_in_work > 0))
                 AND s.site = '[Param.1]'
                 AND cf.handle(+) = s.handle
                 AND cf.ATTRIBUTE(+) = 'QTIMECONTROL'
                 [Param.2]
            ORDER BY s.priority DESC, s.sfc)
    WHERE (ATTRIBUTE = 'QTIMECONTROL' AND VALUE != 'N') OR VALUE IS NULL
    2. use Repeater to query sfc's activity_log table
    SELECT   al.site, al.sfc, al.operation, al.operation_revision, op.description,
             al.step_id,
                TO_CHAR (NEW_TIME (date_time, 'PST', 'GMT'),
                         'YYYY-MM-DD'
             || 'T'
             || TO_CHAR (NEW_TIME (date_time, 'PST', 'GMT'), 'HH24:MI:SS')
                                                                     AS date_time,
             TO_CHAR (NEW_TIME (date_time, 'PST', 'GMT'),
                      'YYYY/MM/DD HH24:MI:SS'
                     ) AS complete_date,
             cf.VALUE AS qtime, action_code, ss.operation_bo AS current_op,
             ss.step_id AS current_step_id,
             ss.step_sequence AS current_setp_sequence,
             al.item || ',' || al.item_revision AS item,
             TO_CHAR (SYSDATE, 'YYYY/MM/DD HH24:MI:SS') AS check_time,
              (SYSDATE - NEW_TIME (date_time, 'PST', 'GMT')) * 24 * 60 AS difference
        FROM activity_log al,
             custom_fields cf,
             sfc s,
             sfc_routing srg,
             sfc_router sr,
             sfc_step ss,
             operation op
       WHERE al.site = '[Param.1]'
         AND (action_code IN( 'COMPLETE' , 'START' , 'SIGNOFF'))
         AND al.sfc = '[Param.2]'
         AND cf.handle(+) =
                   'OperationBO:'
                || al.site
                || ','
                || al.operation
                || ','
                || al.operation_revision
         AND cf.ATTRIBUTE(+) = 'QTIME'
         AND s.handle = srg.sfc_bo
         AND srg.handle = sr.sfc_routing_bo
         AND 'true' = sr.in_use
         AND sr.handle = ss.sfc_router_bo
         AND 0 < ss.qty_in_queue + ss.qty_in_work
         AND s.sfc = al.sfc
         AND al.operation = op.operation
         AND al.operation_revision = op.revision
         AND op.site= '[Param.1]'
         AND al.operation NOT LIKE '%-W'
    ORDER BY date_time DESC
    3. call another transaction to parse the input data and get output data
    4. parse the returned data to form an MII XML output document.
    Thanks!

  • Best Performance in my custom transaction

    Hi,
    I want to ask something:
    I'm trying to enter some materials via purchase order, and I have simulated inbound delivery entry for a custom transaction.
    My operations are:
    Creation of batch
    Split position with batch
    Pack furniture
    Create OT for furniture
    EM of furniture
    The first 3 operations are batch input of VL32N.
    Between each operation I have inserted a "WAIT UP TO 4 SECONDS" to avoid raising an exception, but my transaction runs very slowly.
    If I reduce the wait time, the process doesn't work.
    How can I improve the performance?
    Thanks to all.

    "WAIT UP TO 4 SECONDS" to not raise an exception
    Why is that? You shouldn't use WAIT or an ENQUEUE_SLEEP call unconditionally. You don't know whether it's necessary, or even whether the lag time is long enough. Check for the lock release using the enqueue function in a short DO loop. Calling WAIT that many times is not very good either: each call forces a roll-out/roll-in of your work process.
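    The same idea, sketched in Java rather than ABAP for illustration (the lock-free predicate below is a stand-in for a call to the enqueue check):

```java
import java.util.function.BooleanSupplier;

public class LockPoll {
    // Polls `lockFree` up to `maxTries` times, sleeping `intervalMs` between
    // attempts; returns true as soon as the lock is free, false if it never clears.
    static boolean waitForLock(BooleanSupplier lockFree, int maxTries, long intervalMs)
            throws InterruptedException {
        for (int i = 0; i < maxTries; i++) {
            if (lockFree.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // This fake lock clears after ~200 ms, so the caller waits ~200 ms
        // instead of paying an unconditional 4-second WAIT every time.
        boolean ok = waitForLock(() -> System.currentTimeMillis() - start > 200, 40, 100);
        System.out.println(ok);
    }
}
```

    The loop pays only as much latency as the lock actually holds, with a bounded worst case (maxTries × intervalMs) instead of a fixed sleep.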

  • Performance problem in custom transaction

    Hi,
    We have a custom transaction for checking the roles related to a transaction.
    We have the same copy of the program in DEE, QAS and PRD (obviously).
    The catch is that in DEE the transaction runs very slowly, whereas in QAS and PRD it is normal.
    In debug mode I found that in DEE, accessing AGR_1016 takes a long time.
    I checked the indexes, tables and extents, and noticed that in DEE the extents and the physical table size are larger than in PRD,
    whereas there are more records in PRD and fewer in DEE.
    Please guide me on what checks I can perform to find out where exactly the problem is.
    Regards,
    Siddhartha.

    Hi,
    What database are you running? It sounds like that's where the issue is; if it's a table with a lot of changes, a table reorg might be in order (especially on older databases). You should probably close this message and open another one in the relevant DB forum.
    Michael

  • Slow database insert condition

    I have a situation where we have multiple sessions inserting data into a partitioned table; at any given time we could have many sessions. The problem we are facing is that, for some reason, the insert takes really long (say 10 seconds or greater) for some transactions. Most of the time the inserts are sub-second. This slow-insert condition has no pattern. The only thing I have noticed is that it occurs when there are a lot of sessions open (50-60).
    My question is what I can do in a 9i instance to help pinpoint why a session takes this long to insert. These are simple insert statements and there is no table locking involved.
    Your help is greatly appreciated.

    Since I have about 50-60 sessions doing inserts, this is a very busy system, and the problem might not occur for 10 days, session tracing seems impractical.
    I am looking for a solution where the DB does some internal recording: if an insert takes 10 seconds or more, the DB should record some stats. Is there anything like that?

  • Approval Procedures & messages slow after upgrade to PL2007A

    Hi All
    After upgrading to SBO 2007A it takes very long for approval procedures to come through, although all users are set to refresh messages every 1 minute.
    Also, Pick and Pack is extremely slow. The customer is complaining; any ideas how I can improve their situation?
    Regards
    Erika Boshoff

    We all like to be informed ASAP, which is why a 1-minute setting seems logical. However, from a database performance point of view, it results in many unnecessary transactions which may slow down your system.
    Change it to 10 minutes to see if it helps.

  • ABAP Runtime error when doing component assignment in routing creation.

    Hi All,
    I have a problem with component assignment while creating a routing (CA01) using the Copy From function. We are creating a sales-order-specific routing for a finished material by copying it from the same material. The transaction runs very slowly, and when it gets to component assignment it takes more than 2 hours and ends in a dump.
    This finished material has a very high number of components in its BOM (14,000). Please suggest why this is happening.
    Regds
    Mahesh

    Hi,
    You can analyze the dump using transaction ST22 with the help of your ABAP consultant. He will be able to explain why you are getting the dump.
    Regards,
    V. Suresh

  • After update  iPhone 5c to iOS 8 following problem occurs?

    My iPhone 5c worked and responded properly on iOS 7.1.2; in fact I was very happy with the speed of the phone. But all that happiness turned to tears when I updated to iOS 8, after which the problems below arose:
    • Notification Centre does not respond when I pull it down.
    • Control Centre has the same issue; it also does not open.
    • Transactions are extremely slow; in fact it sometimes lags.
    • Some apps automatically crash and return to the home screen.
    • Sometimes the screen freezes, goes to the Apple logo, and the phone restarts.
    • In multitasking the phone becomes very slow, and I am not getting the response I did before.
    I am not going to blame Apple for launching iOS 8, or ask whether it is a trick to make people buy the iPhone 6.
    The answer is no; they are wiser than us, so whatever they do is presumably right.
    I know my problems will be solved by Apple, but if anyone knows solutions to the above issues, please help me; otherwise, thanks to Apple, as I am a customer of such a company.
    NOTE: I already have AssistiveTouch activated.

    Hi. I updated my iPhone 5 to iOS 8. Now all apps run very slowly, especially the Facebook app.
