COMMIT WORK - performance problem

Dear Fellow SDNers,
I seek your help on the following problem:
Scenario: An inbound IDoc updates an outbound delivery with the picked quantity, posts the goods issue, and then creates the billing document.
Approach: I am using the function module SD_DELIVERY_UPDATE_PICKING to update the delivery from the IDoc data and to post the goods issue. Afterwards, I use BAPI_BILLINGDOC_CREATEMULTIPLE to create the billing document. Before calling this BAPI, I issue a COMMIT WORK statement so that the relevant tables are updated and the invoice can be created properly.
Problem: The COMMIT WORK statement takes a very long time to execute (I have no update tasks of my own that could explain this), so long that the IDoc (probably) runs into a timeout and ends up in status 64. As a result, the code after the COMMIT WORK is not executed and the billing document is not created.
When I debug this, the COMMIT WORK statement leads to a strange screen that looks like a blank report output screen titled "UPDATE CONTROL". In the debugger there is (of course) no timeout, and the billing document is created successfully.
Could anyone provide some pointers to solve this problem?
regards,
Priyank

I have a custom function module Y_IDOC_INPUT_WMSPICK001 that is responsible for the inbound IDoc processing. SAP PI sends the inbound data to ECC, and this function module is then executed.
The FM contains the following code sequence:
1) Call the FM SD_DELIVERY_UPDATE_PICKING
2) COMMIT WORK AND WAIT.
3) Call the BAPI_BILLINGDOC_CREATEMULTIPLE
Step 1 is executed successfully, step 2 takes a long time, and step 3 is then not executed at all; the IDoc ends up with a yellow light (status 64).
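In outline, the relevant part of the FM looks like the sketch below (a minimal sketch; the structure and parameter names are quoted from memory and simplified, so they should be checked against the actual interfaces of SD_DELIVERY_UPDATE_PICKING and BAPI_BILLINGDOC_CREATEMULTIPLE):
DATA: ls_vbkok   TYPE vbkok,                           " delivery header control data (assumed)
      lt_vbpok   TYPE STANDARD TABLE OF vbpok,         " picked quantities per item (assumed)
      lt_billing TYPE STANDARD TABLE OF bapivbrk,
      lt_return  TYPE STANDARD TABLE OF bapiret1,
      lt_success TYPE STANDARD TABLE OF bapivbrksuccess.

* 1) Update the delivery with the picked quantities and post the goods issue
CALL FUNCTION 'SD_DELIVERY_UPDATE_PICKING'
  EXPORTING
    vbkok_wa  = ls_vbkok
  TABLES
    vbpok_tab = lt_vbpok.

* 2) Make the delivery / goods issue changes durable before invoicing
COMMIT WORK AND WAIT.

* 3) Create the billing document
CALL FUNCTION 'BAPI_BILLINGDOC_CREATEMULTIPLE'
  TABLES
    billingdatain = lt_billing
    return        = lt_return
    success       = lt_success.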
I hope this clarifies what I am doing.
regards,
Priyank

Similar Messages

  • Commit work and roll back with simple language and simple example

    Hi gurus,
    could someone explain COMMIT WORK and ROLLBACK WORK in simple language, with a simple example?

    Hi,
    The statement COMMIT WORK completes the current SAP LUW and opens a new one, storing all change requests for the current SAP LUW in the process. In this case, COMMIT WORK performs the following actions:
    It executes all subroutines registered using PERFORM ON COMMIT.
    The sequence is based on the order of registration or according to the priority specified using the LEVEL addition. Execution of the following statements is not permitted in a subroutine of this type:
    PERFORM ... ON COMMIT|ROLLBACK
    COMMIT WORK
    ROLLBACK WORK
    The statement CALL FUNCTION ... IN UPDATE TASK can be executed.
    ROLLBACK WORK:
    The statement ROLLBACK WORK closes the current SAP LUW and opens a new one. In doing so, all change requests of the current SAP LUW are cancelled. To do this, ROLLBACK WORK carries out the following actions:
    1) Executes all subroutines registered with PERFORM ON ROLLBACK.
    2) Deletes all subroutines registered with PERFORM ON COMMIT.
    3) Raises an internal exception in the Object Services that makes sure that the attributes of persistent objects are initialised.
    4) Deletes all update function modules registered with CALL FUNCTION ... IN UPDATE TASK from the VBLOG database table and deletes all transactional Remote Function Calls registered with CALL FUNCTION ... IN BACKGROUND TASK from the database tables ARFCSSTATE and ARFCSDATA.
    5) Removes all SAP locks set in the current program for which the formal parameter _SCOPE of the lock function module was set to the value 2.
    6) Triggers a database rollback, which also ends the current database LUW.
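    A minimal sketch of how these statements interact (Z_UPDATE_MY_TABLE and the row type ZMY_TABLE are hypothetical placeholders):
    DATA: ls_record TYPE zmy_table,    " hypothetical row type
          lv_error  TYPE abap_bool.

    " Register work for the current SAP LUW; nothing is written to the database yet.
    PERFORM save_protocol ON COMMIT.                   " executed only when COMMIT WORK is reached

    CALL FUNCTION 'Z_UPDATE_MY_TABLE' IN UPDATE TASK   " hypothetical V1 update function module
      EXPORTING
        is_record = ls_record.

    IF lv_error = abap_true.
      ROLLBACK WORK.   " discards the registered subroutine and the queued update function module
    ELSE.
      COMMIT WORK.     " runs save_protocol, then triggers the update task and the database commit
    ENDIF.

    FORM save_protocol.
      " Must not contain COMMIT WORK, ROLLBACK WORK or PERFORM ... ON COMMIT|ROLLBACK.
    ENDFORM.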

  • BCS: Performance Issue - Most of the time is spent doing commit work.

    Hello,
    We are experiencing performance issues with our BI server, which can be seen during our BCS runs. Our DBA has indicated that a very high percentage of the time is spent doing "commit work".
    Currently we are running BI7 NW2004s, Basis 700, Support Package 14.
    Has anyone else experienced this? As the BCS run is mainly standard SAP code, I was wondering whether there are SAP Notes that correct this.
    Thank you for any help you could provide us.

    If it is related to SEM-BCS and the new EHP releases, then there are still big problems with the monitor and task status management (meaning performance problems). Is that the case? If yes, you had better look for already released notes regarding this, and open your own OSS messages if you don't find anything relevant.

  • Commit work statement creating problem in CALL TRANSACTION

    Hi Friends,
    we are facing a problem where we need to call a standard program from a Z program.
    We have tried the following ways:
    1. Using the SUBMIT statement, we pass the parameters to the standard program. We call the SUBMIT statement in a loop, but when the standard program gives a dump, our program cannot process the next record in the loop, and this should not be the case. To avoid this we used the second method.
    2. We used CALL TRANSACTION: we created a transaction code for the standard program and call this transaction from the calling program, passing the parameters for the standard program via a BDC table. This works fine even when the standard program gives a dump, but whenever the control reaches a COMMIT WORK statement, control comes back to our calling program without executing the rest of the statements after the commit.
    Our concern is that, even though there is a COMMIT WORK statement, the statements after it should also get executed in the CALL TRANSACTION. Is there any way?
    Regards,
    Sravan

    Hi All,
    I got the solution
    DATA: ctu_parameters TYPE ctu_params.
    ctu_parameters-dismode = 'E'.
    ctu_parameters-updmode = 'A'.
    ctu_parameters-racommit = 'X'. "Do not end the transaction at COMMIT WORK
    CALL TRANSACTION 'XXXX' USING itab_bdcdata OPTIONS FROM ctu_parameters. "'XXXX' = the transaction code created for the standard program
    The above code works even if there is a COMMIT WORK. This might help others.

  • Commit work in FQevents in FICA(PERFORM commitroutine ON COMMIT )

    Hello Experts,
    I am trying to create an event to trigger a workflow using function module SWE_EVENT_CREATE.
    I am doing this in FICA event 5500. After triggering the workflow I need to stop further processing, so I am using an error message statement.
    When I call SWE_EVENT_CREATE without a COMMIT WORK, the event is not triggered.
    When I checked the documentation of this event, it said:
    To ensure the consistency of the system, note that you must not use the following language elements in events:
    COMMIT WORK
    ROLLBACK WORK
    CALL FUNCTION 'DEQUEUE ALL'
    Deletion of locks that you have not set yourself.
    If you update additional data in an event and use the construction PERFORM commitroutine ON COMMIT to do this, note that:
    At the end of the commitroutine, all internal tables from which data was updated must be initialized again to prevent a duplicate update in the next call.
    A PERFORM rollbackroutine ON ROLLBACK must also be called. In the rollbackroutine, initialize the same data that is initialized at the end of the commitroutine.
    If you want to carry out checks in an event, when you issue messages, note that background processing of the process terminates with warning messages. You should therefore avoid issuing warning messages if possible. However, you should definitely issue warning messages if the value of SY-BATCH is initial.
    How can I use PERFORM commitroutine ON COMMIT? Could you please post the code for this?
    Also, please tell me why my event is not generated without COMMIT WORK. Is there a better way to do it?

    Hi Anit,
    The FM SWE_EVENT_CREATE does its job only when a COMMIT WORK is executed after it. Now, as per the general programming guidelines (quoted in your question), you can't write COMMIT WORK in your code. You shouldn't, because it would write a half-baked document to the database, which is undesirable. The workaround prescribed in the event documentation (again, as quoted in your question) lets you achieve the goal in the following manner:
    1. Do all calculations in your event and put the final values - that are necessary for the workflow - in global variables. Refer to the ABAP documentation for PERFORM ... ON COMMIT for choosing global variables over parameter passing.
    2. Once that's done, make the call to the FM, as given below-
    PERFORM start_wf ON COMMIT.   "Within the FM implementing the event 5500.
    *&      Form  start_wf
    *       The form routine to initiate the workflow
    FORM start_wf.
      CALL FUNCTION 'SWE_EVENT_CREATE'
        EXPORTING
          objtype           = objtype
          objkey            = objkey
          event             = event
        TABLES
          event_container   = event_container
        EXCEPTIONS
          objtype_not_found = 1
          OTHERS            = 2.
      IF sy-subrc <> 0.
        RETURN.
      ENDIF.
      CLEAR: objtype.
      CLEAR: objkey.
      CLEAR: event.
      CLEAR: event_container.
      "And all other global variables that are used in your call for this FM
    ENDFORM.                    "start_wf
    3. Note that you need to clean up the global variables set up in step 1 (also mentioned in the event documentation). This ensures that some other call to the same FM doesn't reuse those values.
    You needn't issue a COMMIT WORK statement anywhere in your code written in the event 5500 implementation. The standard FMs that update the SAP tables with the document information contain the COMMIT WORK. Since you have registered the form start_wf via PERFORM ... ON COMMIT, it will be executed along with the database update triggered by the standard FM.
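    The quoted documentation also asks for a matching rollback routine, so that the same globals are cleared if the SAP LUW is rolled back instead of committed. A minimal sketch of that counterpart (using the same global variables as the example above):
    PERFORM start_wf ON COMMIT.
    PERFORM reset_wf_data ON ROLLBACK.   "Registered next to the ON COMMIT registration
    *&      Form  reset_wf_data
    *       Clears the globals if the SAP LUW is rolled back
    FORM reset_wf_data.
      CLEAR: objtype, objkey, event, event_container.
    ENDFORM.                    "reset_wf_data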

  • Problem with COMMIT WORKS command

    Hello all,
    I have a little problem. It seems that I don't understand how the COMMIT WORK command works. When I call a transaction with the USING parameter:
    CALL TRANSACTION 'FPSA' USING itab_bdcdata.
    and a COMMIT WORK occurs inside that transaction, the commit is executed and the transaction ends, which is not good, because there is still code left that should be executed.
    When I run the transaction without USING
    CALL TRANSACTION 'FPSA'
    everything works fine. I have tried using the UPDATE addition, but without result. Does anybody know what the problem is?

    This is meant to work exactly as you described:
    A transaction called with CALL TRANSACTION ... USING returns right after a COMMIT WORK occurs if you do not provide any additional options.
    There is a way to make such a transaction continue with the code after the commit: use the OPTIONS FROM addition of the CALL TRANSACTION statement and set the parameter RACOMMIT.
    For more information, place the cursor on the CALL TRANSACTION statement in your ABAP code and press F1.
    regards
    good luck

  • BAPI_REQUISITION_CREATE Problem with COMMIT WORK AND WAIT

    I ran into an issue and wanted to get your opinions about what might be happening.
    This is my original code, which did not work correctly using COMMIT WORK AND WAIT: after the SELECT statement, lv_count was zero even though PR items had been created. I am thinking that the database update was not complete, but why would that be?
        CALL FUNCTION 'BAPI_REQUISITION_CREATE'
          IMPORTING
            number            = lv_number
          TABLES
            requisition_items = gt_reqitem
            return            = gt_return.
    *   Purchase requisition has been created
        IF lv_number IS NOT INITIAL.
          COMMIT WORK AND WAIT.
    *     Get number of items in PR
          SELECT COUNT( DISTINCT bnfpo ) INTO lv_count
            FROM eban
            WHERE banfn = lv_number.
    This is my corrected code, which works. I removed the AND WAIT from the COMMIT statement and added a separate WAIT statement. I now get the correct number of PR items that were created.
        CALL FUNCTION 'BAPI_REQUISITION_CREATE'
          IMPORTING
            number            = lv_number
          TABLES
            requisition_items = gt_reqitem
            return            = gt_return.
    *   Purchase requisition has been created
        IF lv_number IS NOT INITIAL.
          COMMIT WORK.
          WAIT UP TO 1 SECONDS.
    *     Get number of items in PR
          SELECT COUNT( DISTINCT bnfpo ) INTO lv_count
            FROM eban
            WHERE banfn = lv_number.
    Any ideas?
    Brenda

    Brenda Bankert wrote:
    Yes, I was able to see the message in RETURN and no, I am not calling the FM in a loop. My program calls it one time.
    > Brenda
    If you were able to see the message 'Purchase requisition number & created', it means that the macro macro_end is getting executed, within which the COMMIT WORK statement is encapsulated.
    I have quickly developed a quick-and-dirty program to test the behaviour of 'SET UPDATE TASK LOCAL', and it seems to work as expected.
    Every time I execute the program below with 'SET UPDATE TASK LOCAL', I see the value of lv_count as 1000. However, if I execute it without the 'SET UPDATE TASK LOCAL' statement, the value of the count varies every time, typically around 10 to 15, although the database is successfully updated with all 1000 entries. This makes me believe that the 'SET UPDATE TASK LOCAL' statement does in fact make the subsequent COMMIT synchronous.
    REPORT zytest.
    DATA:
      ls_zytest    TYPE zytest,
      ls_db_zytest TYPE zytest,
      lv_numc04    TYPE numc4 VALUE 1001,
      lv_count     TYPE i,
      lv_index     TYPE i.
    DO 1000 TIMES.
      lv_index = sy-index.
      CLEAR ls_zytest.
      lv_numc04 = lv_numc04  + 1.
      CONCATENATE 'KF00' lv_numc04 INTO ls_zytest-keyfield1.
      CONCATENATE 'KF00' lv_numc04 INTO ls_zytest-keyfield2.
      SET UPDATE TASK LOCAL. " make the following COMMIT process the update FMs of this LUW synchronously
      CALL FUNCTION 'ZYTEST_BAPI'
        EXPORTING
          is_zytest = ls_zytest.
      SELECT SINGLE * FROM zytest INTO ls_db_zytest WHERE keyfield1 = ls_zytest-keyfield1
                                                      AND keyfield2 = ls_zytest-keyfield2.
      IF sy-subrc IS INITIAL.
        WRITE:/ lv_index, ls_db_zytest-keyfield1,ls_db_zytest-keyfield2.
        lv_count = lv_count + 1.
      ENDIF.
    ENDDO.
    WRITE:/ lv_count.
    FUNCTION zytest_bapi .
    *"*"Local Interface:
    *"  IMPORTING
    *"     REFERENCE(IS_ZYTEST) TYPE  ZYTEST
    DATA: transaction_id LIKE arfctid.
      clear transaction_id.
      macro_start.
      CALL FUNCTION 'ZYTEST_UPDATE' IN UPDATE TASK
        EXPORTING
          is_zytest = is_zytest.
      macro_end.
    ENDFUNCTION.
    FUNCTION ZYTEST_UPDATE.
    *"*"Update Function Module:
    *"*"Local Interface:
    *"  IMPORTING
    *"     VALUE(IS_ZYTEST) TYPE  ZYTEST
    INSERT zytest from is_zytest.
    ENDFUNCTION.
    Definition of the database table ZYTEST:
    MANDT        MANDT
    KEYFIELD1    CHAR20
    KEYFIELD2    CHAR20
    NONKEYFIELD  CHAR50
    -Rajesh
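    Applied to Brenda's original snippet, a minimal sketch of this suggestion would be to issue SET UPDATE TASK LOCAL before the BAPI call, so that the COMMIT WORK processes the update modules synchronously in the same work process (variable names kept from the snippet above; to be verified in your own system):
        SET UPDATE TASK LOCAL.   " must be issued before the first update request of the LUW
        CALL FUNCTION 'BAPI_REQUISITION_CREATE'
          IMPORTING
            number            = lv_number
          TABLES
            requisition_items = gt_reqitem
            return            = gt_return.
    *   Purchase requisition has been created
        IF lv_number IS NOT INITIAL.
          COMMIT WORK.           " the update modules now run locally, so EBAN is written here
    *     Get number of items in PR
          SELECT COUNT( DISTINCT bnfpo ) INTO lv_count
            FROM eban
            WHERE banfn = lv_number.
        ENDIF.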

  • "Problem" with COMMIT WORK AND WAIT

    Hi everyone!
    I'm facing the following situation:
    I have a Z program that calls MIGO via CALL TRANSACTION. The only thing I do when calling it is fill a few fields, so the user has to complete the operation before going back to the Z program.
    When someone executes MIGO in the normal way, I mean without a CALL TRANSACTION command, there is an additional step, a modification made in the standard code by someone before. But this part of the code is not executed when MIGO is called via CALL TRANSACTION.
    What I have found out is that just before this inserted code there is a COMMIT WORK AND WAIT command, and it is exactly at this point that, when using the first approach, it leaves MIGO.
    Is there a way to execute the entire process in MIGO, I mean, not leave it at the COMMIT command? I have tried passing different UPDATE parameters (i.e. 'S', 'A' and 'L') to the CALL TRANSACTION command, but that did not work either.
    I hope you get the point!
    Thanks in advance!
    Raphael

    I've just found out something interesting about that.
    When I do a CALL TRANSACTION without passing a BDC data table, the transaction does not leave at the COMMIT WORK command.
    So I think it won't work with the bdcdata table!
    Raphael

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • URGENT------MB5B : PERFORMANCE PROBLEM

    Hi,
    We are getting a time-out error while running transaction MB5B. We posted the issue to SAP Global Support for further analysis, and SAP replied with Note 1005901 for review.
    The note consists of creating a Z table and some Z programs to execute MB5B without the time-out error, but SAP has not described exactly what logic has to be written or how this should be implemented.
    Could anyone suggest how we can proceed further?
    The note is attached below for reference.
              Note 1005901 - MB5B: Performance problems
    Note Language: English   Version: 3   Validity: Valid from 05.12.2006
    Summary
    Symptom
    o The user starts transaction MB5B, or the respective report RM07MLBD, for a very large number of materials or for all materials in a plant.
    o The transaction terminates with the ABAP runtime error DBIF_RSQL_INVALID_RSQL.
    o The transaction runtime is very long and it terminates with the ABAP runtime error TIME_OUT.
    o During the runtime of transaction MB5B, goods movements are posted in parallel:
      - The results of transaction MB5B are incorrect.
      - Each run of transaction MB5B returns different results for the same combination of "material + plant".
    More Terms
    MB5B, RM07MLBD, runtime, performance, short dump
    Cause and Prerequisites
    The DBIF_RSQL_INVALID_RSQL runtime error may occur if you enter too many individual material numbers in the selection screen for the database selection.
    The runtime is long because of the way report RM07MLBD works. It reads the stocks and values from the material masters first, then the MM documents and, in "Valuated Stock" mode, it then reads the respective FI documents. If there are many MM and FI documents in the system, the runtimes can be very long.
    If goods movements are posted during the runtime of transaction MB5B for materials that should also be processed by transaction MB5B, transaction MB5B may return incorrect results.
    Example: Transaction MB5B should process 100 materials with 10,000 MM documents each. The system takes approximately 1 second to read the material master data and approximately 1 hour to read the MM and FI documents. A goods movement for a material to be processed is posted approximately 10 minutes after you start transaction MB5B. The stock for this material before this posting has already been determined. The new MM document is also read, however. The stock read before the posting is used as the basis for calculating the stocks for the start and end date.
    If you execute transaction MB5B during a time when no goods movements are posted, these incorrect results do not occur.
    Solution
    The SAP standard release does not include a solution that allows you to process mass data using transaction MB5B. The requirements for transaction MB5B are very customer-specific. To allow for these customer-specific requirements, we provide the following proposed implementation:
    Implementation proposal:
    o You should call transaction MB5B for only one "material + plant" combination at a time.
    o The list outputs for each of these runs are collected and at the end of the processing they are prepared for a large list output.
    You need three reports and one database table for this function. You can store the lists in the INDX cluster table.
    o Define work database table ZZ_MB5B with the following fields:
      - Material number
      - Plant
      - Valuation area
      - Key field for INDX cluster table
    o The size category of the table should be based on the number of entries in material valuation table MBEW.
    Report ZZ_MB5B_PREPARE
    In the first step, this report deletes all existing entries from the ZZ_MB5B work table and the INDX cluster table from the last mass data processing run of transaction MB5B.
    o The ZZ_MB5B work table is filled in accordance with the selected mode of transaction MB5B:
      - Stock type mode = Valuated stock: include one entry in work table ZZ_MB5B for every "material + valuation area" combination from table MBEW.
      - Other modes: include one entry in work table ZZ_MB5B for every "material + plant" combination from table MARC.
    Furthermore, the new entries in work table ZZ_MB5B are assigned a unique 22-character string that later serves as a key term for cluster table INDX.
    Report ZZ_MB5B_MONITOR
    This report reads the entries sequentially in work table ZZ_MB5B. Depending on the mode of transaction MB5B, a lock is executed as follows:
    o Stock type mode = Valuated stock: for every "material + valuation area" combination, the system determines all "material + plant" combinations. All determined "material + plant" combinations are locked.
    o Other modes: every "material + plant" combination is locked.
      - The entries from the ZZ_MB5B work table can be processed as follows only if they have been locked successfully.
      - Start report RM07MLBD for the current "material + plant" combination, or "material + valuation area" combination, depending on the required mode.
      - The list created is stored with the generated key term in the INDX cluster table.
      - The current entry is deleted from the ZZ_MB5B work table.
      - Database updates are executed with COMMIT WORK AND WAIT.
      - The lock is released.
      - The system reads the next entry in the ZZ_MB5B work table.
    Application
      - The lock ensures that no goods movements can be posted during the runtime of the RM07MLBD report for the "material + plant" combination to be processed.
      - You can start several instances of this report at the same time. This method ensures that all "material + plant" combinations can be processed at the same time.
      - The system takes just a few seconds to process a "material + plant" combination, so there is just minimum disruption to production operation.
      - This report is started until there are no more entries in the ZZ_MB5B work table.
      - If the report terminates or is interrupted, it can be started again at any time.
    Report ZZ_MB5B_PRINT
    You can use this report when all combinations of "material + plant", or "material + valuation area", from the ZZ_MB5B work table have been processed. The report reads the saved lists from the INDX cluster table and adds these individual lists to a complete list output.
    Estimated implementation effort
    An experienced ABAP programmer requires an estimated three to five days to create the ZZ_MB5B work table and these three reports. You can find a similar program as an example in Note 32236: MBMSSQUA.
    If you need support during the implementation, contact your SAP consultant.
    Header Data
    Release Status: Released for Customer
    Released on: 05.12.2006 16:14:11
    Priority: Recommendations/additional info
    Category: Consulting
    Main Component: MM-IM-GF-REP IM Reporting (no LIS)
    The note is not release-dependent.
    Thanks in advance.
    Edited by: Neliea on Jan 9, 2008 10:38 AM
    Edited by: Neliea on Jan 9, 2008 10:39 AM

    Before you try any of this, try working with database hints as described in Notes 921165, 902157 and 918992.
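    For orientation, a rough sketch of the processing loop that the note describes for report ZZ_MB5B_MONITOR is shown below. The ZZ_MB5B work table and the 22-character INDX key come from the note itself; the field names, the lock object EZ_MB5B, the RM07MLBD selection names and the INDX area 'ZM' are assumptions that would have to be adapted:
    REPORT zz_mb5b_monitor.

    DATA: lt_work TYPE STANDARD TABLE OF zz_mb5b,      " work table as defined per the note
          ls_work TYPE zz_mb5b,
          lt_list TYPE STANDARD TABLE OF abaplist.     " captured list output of RM07MLBD

    START-OF-SELECTION.
      SELECT * FROM zz_mb5b INTO TABLE lt_work.        " process the entries sequentially

      LOOP AT lt_work INTO ls_work.
        " Lock the "material + plant" combination (hypothetical custom lock object EZ_MB5B).
        CALL FUNCTION 'ENQUEUE_EZ_MB5B'
          EXPORTING
            matnr          = ls_work-matnr
            werks          = ls_work-werks
          EXCEPTIONS
            foreign_lock   = 1
            system_failure = 2
            OTHERS         = 3.
        CHECK sy-subrc = 0.                            " skip combinations that are currently locked

        " Run the standard report for this single combination and capture its list.
        SUBMIT rm07mlbd
          WITH matnr EQ ls_work-matnr                  " selection names to be checked in SE38
          WITH werks EQ ls_work-werks
          EXPORTING LIST TO MEMORY AND RETURN.
        CALL FUNCTION 'LIST_FROM_MEMORY'
          TABLES
            listobject = lt_list
          EXCEPTIONS
            not_found  = 1
            OTHERS     = 2.

        " Store the list under the generated 22-character key and remove the work entry.
        EXPORT lt_list TO DATABASE indx(zm) ID ls_work-indx_key.
        DELETE FROM zz_mb5b WHERE matnr = ls_work-matnr
                              AND werks = ls_work-werks.
        COMMIT WORK AND WAIT.

        CALL FUNCTION 'DEQUEUE_EZ_MB5B'
          EXPORTING
            matnr = ls_work-matnr
            werks = ls_work-werks.
      ENDLOOP.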

  • COMMIT WORK: Timing of DB commit and update modules

    Hi all,
    Does anyone know, categorically, in which order the asynchronous update modules (CALL FUNCTION ... IN UPDATE TASK) are started relative to the database commit when a COMMIT WORK is executed?
    Does COMMIT WORK:
    - Do PERFORM ... ON COMMIT
    - Start asynchronous update processing
    - Do database commit
    Or does it:
    - Do PERFORM ... ON COMMIT
    - Do database commit
    - Start asynchronous update processing
    My reason for asking is some code (not written by me!) that essentially raises CREATED workflow events in the update task, but performs the corresponding database inserts in the current work process.
    It looks like we are sometimes getting the situation that the table entries do not yet exist when the update modules execute, which in turn suggests to me that perhaps the asynchronous update modules are started just before the database commit that is done when a COMMIT WORK statement is executed.
    Cheers,
    Scott

    Christian,
    Before the update module execution. Here's some code to highlight the issue; let's assume it runs in a dialog process. And to everyone else: yes, I know this is a poor way to implement updates!
    INSERT INTO zmyobject VALUES lv_myobject.
    CALL FUNCTION 'SWE_EVENT_CREATE_IN_UPD_TASK'
      IN UPDATE TASK
      EXPORTING
        objtype = 'ZMYOBJECT'
        objkey  =  lv_myobject-key
        event   = 'CREATED'.
    COMMIT WORK.
    So I meant to ask whether we can guarantee that the new record in table ZMYOBJECT is committed to the database before the update module is executed. It really is a theoretical question, though it did not begin as one.
    I do not believe the answer to this problem can be determined by debugging, because given that these two steps occur so close in time to one another, by the time the update module appeared in the debugger, it would be unrealistic to expect that the DB insert performed in the dialog process had not been committed yet.
    I might be wrong, but I really don't think you're going to find ABAP logic embedded in SAPMSSY0 or elsewhere that invokes the update modules or performs a DB commit. Rather, it is my suspicion that the COMMIT WORK statement works like this:
    - Drop into the kernel
    - Does a callback to SAPMSSY0 to execute form %_BEFORE_COMMIT to raise a static OO event
    - Does a callback to SAPMSSY0 to execute form %_COMMIT and thereby process any PERFORM ... ON COMMIT
    - Does a database commit
    - Does a callback to SAPMSSY0 to execute form %_AFTER_COMMIT to raise another static OO event
    Further, I believe that it is the database commit that now makes the queued CALL FUNCTION ... IN UPDATE TASK now visible in VBLOG to other processes.
    Lastly, and this is of course wild speculation, I suspect that it is an update process running somewhere else that detects the new entries in VBLOG and grabs them for processing.
    So, I'm kind of changing my position from earlier to state that I believe the COMMIT WORK statement does not directly trigger the update modules at all, rather it just does the database commit and this makes visible the pending update modules to the dispatcher / update work processes which probably grab the update LUW's on a first come first served basis.
    At least, that is how I would design it :-).
    Cheers,
    Scott
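    If the goal is simply to make sure the record exists by the time the raised event is processed, one conventional pattern is to move the INSERT into an update function module as well, so that both requests run in the same update LUW after the commit. A minimal sketch (Z_ZMYOBJECT_INSERT_UPD is a hypothetical update FM wrapping the INSERT):
    " Both calls are queued for the same update LUW and processed only after COMMIT WORK.
    CALL FUNCTION 'Z_ZMYOBJECT_INSERT_UPD' IN UPDATE TASK   " hypothetical V1 FM performing the INSERT
      EXPORTING
        is_myobject = lv_myobject.
    CALL FUNCTION 'SWE_EVENT_CREATE_IN_UPD_TASK' IN UPDATE TASK
      EXPORTING
        objtype = 'ZMYOBJECT'
        objkey  = lv_myobject-key
        event   = 'CREATED'.
    COMMIT WORK.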

  • Performance problem PRD environment

    Hi Everybody,
    At some moments during the day we have a performance problem in our PRD environment: the system gets slow.
    The CPU reaches 100% usage.
    The processes using the most CPU are:
    oracle.exe - 35%
    disp+work - 30%
    disp+work - 11%
    disp+work - 05%
    disp+work - 05%
    In SM50 we have the programs running.
    In ST03N we can see that sequential reads and DB time are the main problems.
    If we do performance tuning of the identified programs, can we reduce the performance problem?
    Does anybody know another solution, or where I can get more information to see where exactly the problem is? More transactions, maybe?
    Best Regards,
    Fábio Karnik Tchobnian

    Hi,
    > In SM50 we have the programs running.
    Check what status they are in (e.g. commit, read, etc.); you also need to check the active jobs and get their details.
    > In ST03N we can see the most sequential reads and DB time problems.
    Check for the poorest SQL statements in ST04 => SQL cache => remove '*' => sort by total execution time (descending) => go through the top 3-4 SQL statements with your developer to fine-tune the programs.
    This is just for analysis; if it is happening all the time, then upgrading the CPU may be more worthwhile than tuning.
    Regards;

  • Performance problem while CPU is 80% Idel ?

    Hi,
    My end users are complaining about a performance problem during the execution of a batch process.
    As you can see, there are 1,745 statements executed each second.
    The AWR report shows 98.1% of the DB time as waits on CPU.
    The AWR report also shows that the host CPU is 79.9% idle.
    The second wait event shows only 212 seconds of waits on db file sequential read.
    Yet 4 minutes in a 1-hour period does not seem to be an issue.
    Please advise
    DB Name         DB Id    Instance     Inst Num Startup Time    Release     RAC
    QERP          xxx        erp                 1 21-Jan-13 15:40 11.2.0.2.0   NO
    Host Name        Platform                         CPUs Cores Sockets Memory(GB)
    erptst           HP-UX IA (64-bit)                  16    16       4     127.83
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:     40066 22-Jan-13 20:00:52       207       9.6
      End Snap:     40067 22-Jan-13 21:00:05       210       9.6
       Elapsed:               59.21 (mins)
       DB Time:              189.24 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     8,800M     8,800M  Std Block Size:         8K
               Shared Pool Size:     1,056M     1,056M      Log Buffer:    49,344K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                3.2                0.1        0.00        0.05
           DB CPU(s):                3.1                0.1        0.00        0.05
           Redo size:          604,285.1           27,271.3
       Logical reads:          364,792.3           16,463.0
       Block changes:            3,629.5              163.8
      Physical reads:               21.5                1.0
    Physical writes:               95.3                4.3
          User calls:               68.7                3.1
              Parses:              212.9                9.6
         Hard parses:                0.3                0.0
    W/A MB processed:                1.2                0.1
              Logons:                0.3                0.0
            Executes:            1,745.2               78.8
           Rollbacks:                1.2                0.1
        Transactions:               22.2
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                Buffer  Hit   %:   99.99    In-memory Sort %:  100.00
                Library Hit   %:   99.95        Soft Parse %:   99.85
             Execute to Parse %:   87.80         Latch Hit %:   99.99
    Parse CPU to Parse Elapsd %:   74.76     % Non-Parse CPU:   99.89
    Shared Pool Statistics        Begin    End
                 Memory Usage %:   75.37   76.85
        % SQL with executions>1:   95.31   85.98
      % Memory for SQL w/exec>1:   90.33   82.84
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    DB CPU                                           11,144          98.1
    db file sequential read              52,714         214      4    1.9 User I/O
    SQL*Net break/reset to client        29,050           6      0     .1 Applicatio
    log file sync                         2,536           6      2     .0 Commit
    buffer busy waits                     4,338           2      1     .0 Concurrenc
    Host CPU (CPUs:   16 Cores:   16 Sockets:    4)
    ~~~~~~~~         Load Average
                   Begin       End     %User   %System      %WIO     %Idle
                    0.34       0.33       19.7        0.4        1.8       79.9

    Nikolay Savvinov wrote:
    > if the users are complaining about performance of the batch process, then that's what you should be looking at, not the entire system.
    I find it strange to see "end users" and "the batch process" in the same sentence (as it was in the first post). "End users" gives me the feeling of a significant number of concurrent sessions with people waiting for results in real time at the far end, while "batch process" carries the image of a small number of large-scale processes running overnight to prepare the data for the following morning.
    I mention this because my first view of the AWR output was: you've got 16 CPUs, only three in use, virtually no users, and doing very little work, how can the users complain. (One answer, of course, is that the 13 CPUs could be locked out of use as far as Oracle is concerned). On the second read I decided that the "users" had gone home, and the complaint was simply that the batch process wasn't completing in time.
    In this case I think "the entire system" IS "the batch process"
    Determine which stored procedures and/or SQL statements took longer than usual and then find out why. Most likely you'll be able to find
    everything you need in AWR views (DBA_HIST_SQL%) and ASH archive (DBA_HIST_ACTIVE_SESS_HISTORY).
    If the batch process has changed dramatically and recently, then a simple first step might be to look at the current AWR report, find the few most time-consuming SQL statements, and use the awrsqrpt.sql script to find their history of execution plans.
    But I'd also just look at the expensive SQL - bearing in mind, particularly, that there are very few user calls per second, yet many hundred executions per second: it strikes me that there could be quite a lot of PL/SQL going on doing something a little bit expensive many times or some PL/SQL function that calls some SQL that used to be called rarely from an SQL statement but is now (due, perhaps to a change in plan) being called much more frequently - so check SQL Ordered by Executions.
    Regards
    Jonathan Lewis

  • Invalid COMMIT WORK in an update function module.

    Hi Experts,
    We currently have a problem posting a PO goods receipt with transaction MIGO. Our requirement is to create a delivery automatically after the PO goods receipt is saved, so I have written a program for creating the delivery. The program is triggered by an output type when the save is done.
    Error info is as below
    "Calling a COMMIT WORK in an update process is not allowed
    because the function modules triggered in a Logical Unit
    of Work cannot then be processed correctly."
    Any help will be appreciated
    Thanks in advance
    Richard Zhou

    Hi Richard,
    First create an implementation for the BAdI in SE19, then put a breakpoint in any of its methods and run MIGO.
    You can clearly see in the debugger which values come in.
    Please note that you have two methods: one runs in the update task and the other as a normal function module. To view the documentation of the BAdI, go to SE18 -> enter the name of the BAdI -> Display -> click on Documentation to view the purpose and use of the BAdI you have mentioned. Please note that not all BAdIs have documentation.
    Also, if you feel that the replies are helping you out, please reward points; this way the SDN members will be willing to do a bit more homework for you and help you out.
    To proceed with the BAdI, please check the documentation below:
    Business add-ins when creating a material document
    The enhancement MB_DOCUMENT_BADI has two methods that are called up by the same interface, though at different times. All material document data from the following tables is transferred to this business add-in:
    MKPF (material document header)
    MSEG (material document items)
    VM07M (update data)
    This data can be used in other programs, but cannot be changed.
    The methods differ according to the time at which they are called up:
    The method MB_DOCUMENT_BEFORE_UPDATE is called up before the FI document is created. This means that it is called up even if the program is terminated by an error during the subsequent processing. The update of data in separate tables should always be contained in function modules that are called up with the addition 'in update task'. This ensures that all the data is updated consistently.
    The method MB_DOCUMENT_UPDATE is not carried out until update. This means that all updates are carried out immediately in their own tables and do not have to be contained in 'update task' in function modules. For performance reasons, you should not re-read the tables or carry out any time-consuming routines at this point.
    You should always call up MB_DOCUMENT_BEFORE_UPDATE before MB_DOCUMENT_UPDATE, particularly if time is a critical factor when posting the material documents. The method MB_DOCUMENT_UPDATE is processed after the FI document numbers are called. As a result, no other FI documents can be posted until this document is completely updated.
    Even if the two methods are in the same class, you cannot access the same global fields, as the methods are called up at different times and are therefore carried out in another roll area.
    From the business add-in display, you can go to coding examples for both methods by choosing Goto -> Example coding -> Display
    Note
    The enhancement does not transfer any data to the material document, that is, you cannot change material document data before it is updated.
    If this business add-in is not set up properly, it may result in an inconsistency between the documents and the stocks and between the material documents and the accounting documents. Inconsistencies like these may be caused by the following elements in the business add-in:
    COMMIT WORK
    Remote function call (CALL FUNCTION ... DESTINATION)
    Own updates in document tables or stock tables (for example, update in tables MBEW, MARD, MSEG)
    The unlocking of data (for example, via DEQUEUE_ALL)
    Before the two business add-ins are called up, data is already flagged for the UPDATE. If a COMMIT WORK or a Remote Function Call is transmitted in the enhancement, these are written in the database. If another error occurs after the business add-ins are processed, you cannot carry out a complete ROLL BACK, as the data up to the COMMIT or Remote Function Call has already been written in the database. This can result in an inconsistent status (for example, material document without accounting document), which can only be repaired with considerable cost and effort.
    The business add-ins are not suitable for customer-specific updates in the stock tables, as updates like these destroy the standard stock update.
    Unlocking the data (for example, via DEQUEUE_ALL) is also critical, as the data that is to be updated is no longer protected from updates from external systems, and inconsistencies can result from parallel updates.
    Before you activate an enhancement, check carefully that the business add-in does not contain any critical coding places.
    If data inconsistencies have already occurred in your system as a result of the business add-in, remove the critical coding so that it does not cause any further inconsistencies.
    Regards
    Byju
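    A minimal sketch of the pattern the documentation describes, for a BAdI implementation of MB_DOCUMENT_BADI (Z_CREATE_DELIVERY_UPD is a hypothetical update function module wrapping the delivery creation; the method's parameter names are quoted from memory and should be checked in SE18):
    METHOD if_ex_mb_document_badi~mb_document_before_update.
      " No COMMIT WORK here: register the delivery creation in the same LUW instead,
      " so it is only executed if the material document itself is posted successfully.
      CALL FUNCTION 'Z_CREATE_DELIVERY_UPD' IN UPDATE TASK
        TABLES
          it_mseg = xmseg.   " material document items handed over by the BAdI
    ENDMETHOD.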

  • COMMIT WORK on BADI BUPA_GENERAL_UPDATE

    Hi all,
    We're trying to propagate partner functions from the BP to associated business transactions.
    We're using the COMMIT WORK instruction in BUPA_GENERAL_UPDATE. The problem comes when we propagate the partner functions more than once for the BP: the first time it works, but the second time causes a short dump (only in PC-UI; in SAP GUI it works properly).
    We need to commit to unlock the modified opportunities.
    Anyone knows the adecuate method to commit the work in PC-UI for transaction BP?
    Thanks in advance, any help will be appreciated.
    David

    Hi David,
    My first try would be to issue the COMMIT statement inside a form routine (commitroutine) and register this form as:
    PERFORM commitroutine ON COMMIT.
    Let me know if it works.
    Hope it helps.
    Thanks, Debasish
