GD13 error: COLLECT_OVERFLOW_TYPE_P

Dear All,
When we execute the standard report GD13, the system shows the runtime error COLLECT_OVERFLOW_TYPE_P.
An internal table field has been defined too small; some document appears to have been posted with an amount exceeding the 24-digit field length. Please advise how we can rectify this. Also, please suggest the main table behind GD13 from which we can provide the data to the user.
Thanks & Regards,
Pankaj
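For what it's worth, the dump can be reproduced in a few lines. This is only an illustrative sketch with made-up field sizes, not the actual GD13/GLTx definitions: COLLECT adds every numeric non-key component to the existing row, so a packed field that is declared too short overflows as soon as the running total exceeds its capacity.

```abap
* Illustrative sketch only: a deliberately small packed field
* overflows on the second COLLECT (COLLECT_OVERFLOW_TYPE_P).
DATA: BEGIN OF ls_total,
        racct TYPE c LENGTH 10,           " account number = key
        hsl   TYPE p LENGTH 4 DECIMALS 2, " max 99999.99 -> too small
      END OF ls_total,
      lt_total LIKE HASHED TABLE OF ls_total
               WITH UNIQUE KEY racct.

ls_total-racct = '3110100'.
ls_total-hsl   = '60000.00'.
COLLECT ls_total INTO lt_total.
COLLECT ls_total INTO lt_total.  " 120000.00 no longer fits -> dump
* Remedy in custom code: widen the field, e.g. TYPE p LENGTH 8
* DECIMALS 2. For a standard report such as GD13, check the posted
* document amounts and search SAP Notes instead of changing the code.
```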


Similar Messages

  • ERROR : COLLECT_OVERFLOW_TYPE_P

    hi all,
    i made a report which takes data from a Z table. The problem is that with a small amount of data in the Z table it runs correctly, but with a large number of records it terminates with the runtime error 'COLLECT_OVERFLOW_TYPE_P'. Can anyone please help me?
    code:
    REPORT  zsr2_cpc.
    TYPE-POOLS : slis.
    TABLES : mast, stpo, makt,mard, marc,ekpo,ekko,lfa1,mara,eket,stas,zstas_stpo,t001.
    **DATA : BEGIN OF input_data OCCURS 0,
          material LIKE stpo-idnrk,
          requiredqty LIKE stpo-menge,
          END OF input_data.
    DATA : BEGIN OF input_data OCCURS 0,
           matnr LIKE zsimu-matnr,
           zmenge LIKE zsimu-zmenge,
           END OF input_data.
    DATA : BEGIN OF break_data OCCURS 0,
           material LIKE stpo-idnrk,
           requiredqty(16) TYPE p DECIMALS 3,"like stpo-menge,
           balanceqty(16) TYPE p DECIMALS 3," like stpo-menge,
          alpos LIKE stpo-alpos, "alternate item flag
           END OF break_data.
    DATA : BEGIN OF itab2 OCCURS 0,
           material LIKE stpo-idnrk,
           requiredqty(16) TYPE p DECIMALS 3," like stpo-menge,
          ALPOS like stpo-ALPOS, "alternate item flag
           END OF itab2.
    DATA : BEGIN OF itab OCCURS 0,
           material LIKE stpo-idnrk,
           requiredqty LIKE stpo-menge,
           balanceqty LIKE stpo-menge,
           str_stk LIKE stpo-menge,
           prd_loc_stk LIKE stpo-menge,
           menge LIKE ekpo-menge,
           meins LIKE mara-meins,
           maktx LIKE makt-maktx,
           labst LIKE mard-labst,
           insme LIKE mard-insme,
           beskz LIKE marc-beskz ,
           leadtime LIKE marc-dzeit,
           ebeln LIKE ekpo-ebeln,
           poqty LIKE ekpo-menge,
           bedat LIKE ekko-bedat,
           ekgrp LIKE ekko-ekgrp,
           name1 LIKE lfa1-name1,
           sobsl LIKE marc-sobsl,
           lgpro LIKE marc-lgpro,
          alpos LIKE stpo-alpos, "alternate item flag
           lblab LIKE mslbh-lblab,
           END OF itab.
    DATA : BEGIN OF itab_n OCCURS 0,
          material LIKE stpo-idnrk,
          requiredqty LIKE stpo-menge,
          balanceqty LIKE stpo-menge,
          str_stk LIKE stpo-menge,
          prd_loc_stk LIKE stpo-menge,
          menge LIKE ekpo-menge,
          meins LIKE mara-meins,
          maktx LIKE makt-maktx,
          labst LIKE mard-labst,
          insme LIKE mard-insme,
          beskz LIKE marc-beskz ,
          leadtime LIKE marc-dzeit,
          ebeln LIKE ekpo-ebeln,
          poqty LIKE ekpo-menge,
          bedat LIKE ekko-bedat,
          ekgrp LIKE ekko-ekgrp,
          name1 LIKE lfa1-name1,
          sobsl LIKE marc-sobsl,
          lgpro LIKE marc-lgpro,
          alpos LIKE stpo-alpos, "alternate item flag
          lblab LIKE mslbh-lblab,
          END OF itab_n.
    DATA : qpa TYPE p DECIMALS 3.
    DATA  count TYPE i.
    DATA : count2 TYPE i,
           count3 TYPE i,
           count4 TYPE i,
           qty_labst LIKE mard-labst,
           qty_labst1 LIKE mard-labst,
           qty_insme LIKE mard-insme.
    DATA:   BEGIN OF bet OCCURS 50.
            INCLUDE STRUCTURE ekbe.
    DATA:   END OF bet.
    DATA:   BEGIN OF bzt OCCURS 50.
            INCLUDE STRUCTURE ekbz.
    DATA:   END OF bzt.
    DATA:   BEGIN OF betz OCCURS 50.
            INCLUDE STRUCTURE ekbez.
    DATA:   END OF betz.
    DATA:   BEGIN OF bets OCCURS 50.
            INCLUDE STRUCTURE ekbes.
    DATA:   END OF bets.
    DATA:   BEGIN OF xekbnk OCCURS 10.
            INCLUDE STRUCTURE ekbnk.
    DATA:   END OF xekbnk.
    DATA : w_container TYPE scrfname VALUE 'CL_GRID',
           w_cprog TYPE lvc_s_layo,
           g_repid LIKE sy-repid,
           w_save TYPE c,
           w_exit TYPE c,
           cl_grid TYPE REF TO cl_gui_alv_grid,
           cl_custom_container TYPE REF TO cl_gui_custom_container,
           it_fld_catalog TYPE slis_t_fieldcat_alv,
           wa_fld_catalog TYPE slis_t_fieldcat_alv WITH HEADER LINE ,
           layout TYPE slis_layout_alv,
           col_pos  LIKE sy-cucol ,
           alvfc TYPE slis_t_fieldcat_alv.
    *SELECTION-SCREEN : BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
    *SELECT-OPTIONS   : matnr FOR mast-matnr.  " NO INTERVALS NO-EXTENSION.
    *SELECT-OPTIONS   : menge FOR stpo-menge NO INTERVALS NO-EXTENSION.
    *SELECTION-SCREEN : END OF BLOCK b1.
    PERFORM fill_catalog1 USING:
    'MATERIAL'    'ITAB'    'Material' ,
    'MAKTX'    'ITAB'    'Description' ,
    'MEINS'    'ITAB'    'Unit',
    'REQUIREDQTY'    'ITAB'    'Required Qty',
    'BALANCEQTY'    'ITAB'    'Short Qty',
    'LABST' 'ITAB'    'Total Stock',
    'STR_STK'    'ITAB'    'Store Stock',
    'PRD_LOC_STK'   'ITAB'    'Production loc Stock',
    'POQTY' 'ITAB'    'PO Qty',
    'BEDAT' 'ITAB'    'PO Date',
    'NAME1' 'ITAB'    'Vendor Name',
    'LEADTIME' 'ITAB' 'Lead Time',
    'BESKZ' 'ITAB'    'P Type',
    'EKGRP' 'ITAB'    'P Group'.
    SELECT matnr zmenge INTO TABLE input_data FROM zsimupp  .
    LOOP AT input_data.
      SELECT SINGLE * FROM mast WHERE matnr = input_data-matnr AND stlan = '3'.
    IF sy-subrc <> 0.
       MESSAGE 'DATA NOT FOUND.' TYPE 'I'.
    ENDIF.
      SELECT * FROM zstas_stpo WHERE stlnr = mast-stlnr.
    qpa = zstas_stpo-menge.
        break_data-material = zstas_stpo-idnrk.
        break_data-requiredqty =  input_data-zmenge .
        SELECT SINGLE * FROM mara WHERE matnr = zstas_stpo-idnrk.
        IF ( zstas_stpo-meins = 'G' AND mara-meins = 'KG' ) OR
                       ( zstas_stpo-meins = 'MM' AND mara-meins = 'M' ).
          break_data-requiredqty = break_data-requiredqty / 1000.
        ENDIF.
        APPEND break_data.
        CLEAR break_data.
        CLEAR qpa.
      ENDSELECT.
      LOOP AT break_data.
        itab-material = break_data-material.
        SELECT SINGLE maktx FROM makt INTO itab-maktx WHERE matnr = itab-material.
        SELECT SINGLE meins FROM mara INTO itab-meins WHERE matnr = itab-material.
        itab-requiredqty = break_data-requiredqty.
        SELECT labst INTO qty_labst FROM mard WHERE matnr = break_data-material AND ( lgort <> 'REJS' AND lgort <> 'SCRP' ).
          qty_labst1 = qty_labst1 + qty_labst.
        ENDSELECT.
        itab-labst = qty_labst1.
        IF itab-requiredqty < itab-labst.
          itab-balanceqty = 0.
        ELSE.
          itab-balanceqty = itab-labst - break_data-requiredqty.
        ENDIF.
    itab-balanceqty = qty_labst1 - break_data-requiredqty.
        IF itab-balanceqty < 0.
          itab-balanceqty = itab-balanceqty * ( -1 ).
        ENDIF.
        SELECT SUM( labst ) INTO itab-str_stk FROM mard WHERE matnr = break_data-material
             AND ( lgort = 'RECP' OR lgort = 'MAIN' OR lgort = 'COMP' OR lgort = 'EBSS' OR lgort = 'EDSS' OR lgort = 'ELEC' OR lgort = 'GDNS'
                                                    OR lgort = 'SFNS' OR lgort = 'FNGS' OR lgort = 'SPQA' OR lgort = 'COQA' ).
        itab-prd_loc_stk = itab-labst - itab-str_stk.
    ************for PO, lead time and P group
        SELECT SINGLE * FROM marc WHERE matnr = itab-material.
        IF sy-subrc = 0.
          MOVE marc-beskz TO itab-beskz.
          MOVE marc-sobsl TO itab-sobsl.
          MOVE marc-lgpro TO itab-lgpro.
          IF marc-beskz = 'E'.
            MOVE marc-dzeit TO itab-leadtime.
          ENDIF.
          IF marc-beskz = 'F'.
            MOVE marc-plifz TO itab-leadtime.
            MOVE marc-ekgrp TO itab-ekgrp.
          ENDIF.
        ENDIF.
        IF itab-balanceqty <> 0 AND itab-beskz = 'F'.
          SELECT  * FROM ekpo WHERE matnr = itab-material
                                             AND elikz <> 'X'
                                             AND loekz = ' '.
            CALL FUNCTION 'ME_READ_HISTORY'
              EXPORTING
                ebeln  = ekpo-ebeln
                ebelp  = ekpo-ebelp
                webre  = 'X'
              TABLES
                xekbe  = bet
                xekbz  = bzt
                xekbes = bets
                xekbez = betz
                xekbnk = xekbnk.
            SELECT SINGLE * FROM eket WHERE ebeln = ekpo-ebeln
                                        AND ebelp = ekpo-ebelp.
            itab-poqty = itab-poqty + ( eket-menge - eket-wemng -
                                                          bets-wesbs ).
          ENDSELECT.
          SELECT SINGLE * FROM ekko WHERE ebeln = ekpo-ebeln.
          IF sy-subrc = 0.
            SELECT SINGLE * FROM lfa1 WHERE lifnr = ekko-lifnr.
            IF sy-subrc = 0.
              MOVE ekpo-ebeln TO itab-ebeln.
              MOVE ekko-bedat TO itab-bedat.
              MOVE lfa1-name1 TO itab-name1.
            ENDIF.
          ENDIF.
        ENDIF.
        APPEND itab.
        CLEAR itab.
        qty_labst1 = 0.
        DELETE break_data.
        CLEAR break_data.
        CLEAR ekpo.
        CLEAR ekko.
      ENDLOOP.
    *endselect.
    delete input_data.
    clear input_data.
    ENDLOOP.
    SORT itab BY material.
    LOOP AT itab.
      COLLECT itab INTO itab_n.
    *append itab_n.
    ENDLOOP.
    layout-zebra = 'X' .
    layout-colwidth_optimize(1) = 'X'.
    CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
      EXPORTING
        i_callback_program = 'ZSR2_CPC'
        is_layout          = layout
        it_fieldcat        = it_fld_catalog
        i_default          = 'X'
        i_save             = 'A'
      TABLES
        t_outtab           = itab_n
      EXCEPTIONS
        program_error      = 1
        OTHERS             = 2.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    *&---------------------------------------------------------------------*
    *&      Form  FILL_CATALOG1
    *&---------------------------------------------------------------------*
    *       text
    *      -->P_FIELDNAME  text
    *      -->P_REF_TABLE  text
    *      -->P_SCRTEXT    text
    *----------------------------------------------------------------------*
    FORM fill_catalog1  USING   p_fieldname TYPE any
                                p_ref_table TYPE any
                                p_scrtext   TYPE any.
      CLEAR : wa_fld_catalog.
      wa_fld_catalog-fieldname  = p_fieldname.
      wa_fld_catalog-tabname    = p_ref_table.
      wa_fld_catalog-seltext_s  = p_scrtext.
      wa_fld_catalog-seltext_m  = p_scrtext.
      wa_fld_catalog-seltext_l  = p_scrtext.
      wa_fld_catalog-outputlen = 15.
    wa_fld_catalog-do_sum  = 'X'.
      APPEND wa_fld_catalog TO it_fld_catalog.
    ENDFORM.                    " fill_catalog1
    regards saurabh.

    hi all,
    thanx for the replies. i changed itab but i am still getting the same error. can anyone please correct it?
    changed itab:
    DATA : BEGIN OF itab OCCURS 0,
           material LIKE stpo-idnrk,
           requiredqty type p decimals 2,          "LIKE stpo-menge,
           balanceqty type p decimals 2,           "LIKE stpo-menge,
           str_stk type p decimals 2,              "LIKE stpo-menge,
           prd_loc_stk type p decimals 2,          "LIKE stpo-menge,
           menge type p decimals 2,                      " LIKE ekpo-menge,
           meins LIKE mara-meins,
           maktx LIKE makt-maktx,
           labst type p decimals 2,                   "LIKE mard-labst,
           insme type p decimals 2,                     "LIKE mard-insme,
           beskz LIKE marc-beskz ,
           leadtime LIKE marc-dzeit,
           ebeln LIKE ekpo-ebeln,
           poqty LIKE ekpo-menge,
           bedat LIKE ekko-bedat,
           ekgrp LIKE ekko-ekgrp,
           name1 LIKE lfa1-name1,
           sobsl LIKE marc-sobsl,
           lgpro LIKE marc-lgpro,
          alpos LIKE stpo-alpos, "alternate item flag
           lblab LIKE mslbh-lblab,
           END OF itab.
    DATA : BEGIN OF itab_n OCCURS 0,
           material LIKE stpo-idnrk,
           requiredqty type p decimals 2,          "LIKE stpo-menge,
           balanceqty type p decimals 2,           "LIKE stpo-menge,
           str_stk type p decimals 2,              "LIKE stpo-menge,
           prd_loc_stk type p decimals 2,          "LIKE stpo-menge,
           menge type p decimals 2,                      " LIKE ekpo-menge,
           meins LIKE mara-meins,
           maktx LIKE makt-maktx,
           labst type p decimals 2,                   "LIKE mard-labst,
           insme type p decimals 2,                     "LIKE mard-insme,
           beskz LIKE marc-beskz ,
           leadtime LIKE marc-dzeit,
           ebeln LIKE ekpo-ebeln,
           poqty LIKE ekpo-menge,
           bedat LIKE ekko-bedat,
           ekgrp LIKE ekko-ekgrp,
           name1 LIKE lfa1-name1,
           sobsl LIKE marc-sobsl,
           lgpro LIKE marc-lgpro,
          alpos LIKE stpo-alpos, "alternate item flag
           lblab LIKE mslbh-lblab,
           END OF itab_n.
    can anyone please help me?
    regards saurabh.
    Edited by: saurabh srivastava on Mar 4, 2009 8:32 AM
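    One thing worth checking in the declarations above: COLLECT sums every numeric (i, p, f) component that is not part of the key, not just the quantity fields. Character-like fields (material, maktx, meins, bedat) act as the key, but a small packed field such as leadtime (LIKE marc-dzeit, a short packed number) is also totalled on every COLLECT and can overflow even after the quantity fields were widened. A hedged sketch of the idea, with only a few of the fields shown:

    ```abap
    * Sketch: every non-character, non-key component is summed by COLLECT.
    * Widening only requiredqty/balanceqty is not enough if other packed
    * fields (leadtime here) stay short and the same material repeats.
    DATA : BEGIN OF itab_n OCCURS 0,
             material    LIKE stpo-idnrk,   " char-like -> part of the key
             maktx       LIKE makt-maktx,   " char-like -> part of the key
             requiredqty TYPE p DECIMALS 2, " summed
             leadtime    TYPE p DECIMALS 0, " widened copy of marc-dzeit
           END OF itab_n.
    ```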

  • Run time errors  COLLECT_OVERFLOW_TYPE_P

    hi everyone,
    short text of my error:  An internal table field has been defined too small. 
    642      SORT IT_FINAL BY ERDAT MATNR.                                                              
      643      LOOP AT IT_FINAL.                                                                          
      644      * move it_final-vbeln to it_final1-vbeln.
      645      MOVE IT_FINAL-MATNR TO IT_FINAL1-MATNR.                                                    
      646      MOVE IT_FINAL-ARktx TO IT_FINAL1-arktx.                                                    
      647      MOVE IT_FINAL-KWMENG TO IT_FINAL1-KWMENG.                                                  
      648      MOVE IT_FINAL-nvamt TO IT_FINAL1-nvamt.                                                    
      649      * it_final1-amt = it_final-nvamt / it_final-kwmeng.
    >>>>>      COLLECT IT_FINAL1.                                                                         
      651      ENDLOOP.                                                                               
      652      *
    653      loop at it_final1.
    The error is in line 650; the field is nvamt.
    How can I increase the size of the internal table field? Any other suggestions are welcome.
    Moderator message: please try solving yourself before asking, the error text is quite descriptive.
    Locked by: Thomas Zloch on Aug 4, 2010 1:13 PM

    Hi Karthe,
    Please check the declaration of it_final and it_final1. If there is a problem still please post the data definitions here.
    Regards,
    Kiran.
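    Building on Kiran's point: if nvamt in it_final1 is declared LIKE a currency amount (typically CURR 13 with 2 decimals), the running total built by COLLECT can outgrow it. A possible widened declaration; the field names are taken from the post, the source fields and lengths are assumptions:

    ```abap
    * Sketch only: widen the summed fields so the COLLECT totals fit.
    DATA : BEGIN OF it_final1 OCCURS 0,
             matnr  LIKE mara-matnr,              " key (character-like)
             arktx  LIKE vbap-arktx,              " key (character-like)
             kwmeng TYPE p LENGTH 12 DECIMALS 3,  " assumed: was a QUAN field
             nvamt  TYPE p LENGTH 12 DECIMALS 2,  " widened amount field
           END OF it_final1.
    ```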

  • PT60 Runtime Error -  COLLECT_OVERFLOW_TYPE_P

    Hi,
    When I run time evaluation for one of the employees, it generates a runtime error. I used the debugger to look into the details and found that it happens at "COLLECT CVS" (main program SAPFP51T, source code of RPTMAS04_FUCUMBT). CVS initially has ANZHL = 99716.00, and collecting a further 511.00 causes the overflow (99716 + 511 = 100227). I can't change the data element of ANZHL because it is standard. What shall I do? Thank you.

    Hi
    when you run the debugger, check the field acdate; it gives you the date on which the problem occurred. Try running time evaluation up to that date (the previous day) and see.
    Then open CUMBT in the log and open the SALDO table; I am sure you will see a huge number in one of the time types on that date. Then go back and check the rule that fills that time type, as there might be an error there.
    If you have any questions please let me know. Hope it helps!
    Good Luck

  • GD13 error

    I used GVTR to carry forward to the next year. For Jan to Dec 2008, balance sheet account 3110100 (retained earnings, beginning of year) shows 134998.34 USD in GD13.
    But GD13 for year 2009, beginning of January, shows 1197677.23 USD.
    Why is the closing balance not equal to the beginning balance? Thanks

    I re-ran GVTR; only 1197677.23 USD is carried forward.

  • COLLECT_OVERFLOW_TYPE_P occurred while collecting an amount into field KSL07 of table GLT3

    Runtime Errors         COLLECT_OVERFLOW_TYPE_P
    Except.                CX_SY_ARITHMETIC_OVERFLOW
    Date and Time          27.07.2009 18:31:32
    Short text
         An internal table field has been defined too small.
    Error analysis
         An exception occurred that is explained in detail below.
         The exception, which is assigned to class 'CX_SY_ARITHMETIC_OVERFLOW', was not
          caught in
      1506     ASSIGN TAB_GLT3-KSLVT INCREMENT INT_GLS3_ADD-OFFSET TO <FELD>
      1507                           RANGE TAB_GLT3 CASTING TYPE P.
      1508     MOVE INT_GLS3-KSL TO <FELD> .
    >>>>>     COLLECT TAB_GLT3 .
      1510   ENDLOOP.
      1511
      1512
      1513   IF SY-ONCOM = 'P'.
      1514     PERFORM NEW_UPDATE_GLT3 .
    The above error clearly says that the field "KSL07" of table GLT3 is overflowing,
    but we checked both the file and the table, and there is no unusually large value in SAP. The file is uploaded using program RFBIBL00.
    Please advise why this type of error occurs.
    Thanks

    Hi,
    The amount being calculated is too big to be accommodated in the field KSL07.
    Check whether the decimal places are being handled properly.
    Regards,
    Ankur Parab

  • About Collect Statement.

    hi experts,
         i am generating a report in the FICO module. i have to fetch the data from the field DMBTR (amount), which contains both credits and debits. i have to do the calculations and display the final balance for each G/L account in the report, since each account has its own credits and debits. if i use the COLLECT statement it only sums up either the debits or the credits, but i want both to be netted into one balance. is there any other way to do this? thanks in advance,
                                   santosh.
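    One common approach (a sketch, assuming BSEG-like fields HKONT, SHKZG and DMBTR): flip the sign of credit amounts (SHKZG = 'H') before the COLLECT, so one pass nets debits against credits per account.

    ```abap
    DATA: lt_bseg TYPE TABLE OF bseg,
          ls_bseg TYPE bseg,
          BEGIN OF ls_bal,
            hkont TYPE bseg-hkont,             " G/L account = key
            dmbtr TYPE p LENGTH 12 DECIMALS 2, " netted balance
          END OF ls_bal,
          lt_bal LIKE HASHED TABLE OF ls_bal WITH UNIQUE KEY hkont.

    LOOP AT lt_bseg INTO ls_bseg.
      ls_bal-hkont = ls_bseg-hkont.
      ls_bal-dmbtr = ls_bseg-dmbtr.
      IF ls_bseg-shkzg = 'H'.                  " credit -> negative
        ls_bal-dmbtr = - ls_bal-dmbtr.
      ENDIF.
      COLLECT ls_bal INTO lt_bal.              " nets per account
    ENDLOOP.
    ```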

    Syntax Diagram
    COLLECT
    Syntax
    COLLECT wa INTO itab [result].
    Effect
    This statement inserts the contents of a work area wa either as single row into an internal table itab or adds the values of its numeric components to the corresponding values of existing rows with the same key. As of Release 6.10, you can use result to set a reference to the inserted or changed row in the form of a field symbol or data reference.
    Prerequisite for the use of this statement is that wa is compatible with the row type of itab and all components that are not part of the table key must have a numeric data type (i, p, f).
    In standard tables that are only filled using COLLECT, the entry is determined by a temporarily created hash administration. The workload is independent of the number of entries in the table. The hash administration is temporary and is generally invalidated when the table is accessed for changing. If further COLLECT statements are entered after an invalidation, a linear search of all table rows is performed. The workload for this search increases in a linear fashion in relation to the number of entries.
    In sorted tables, the entry is determined using a binary search. The workload has a logarithmic relationship to the number of entries in the table.
    In hashed tables, the entry is determined using the hash administration of the table and is always independent of the number of table entries.
    If no line is found with an identical key, a row is inserted as described below, and filled with the content of wa:
    In standard tables the line is appended.
    In sorted tables, the new line is inserted in the sort sequence of the internal table according to its key values, and the table index of subsequent rows is increased by 1.
    In hashed tables, the new row is inserted into the internal table by the hash administration, according to its key values.
    If the internal table already contains one or more rows with an identical key, those values of the components of work area wa that are not part of the key, are added to the corresponding components of the uppermost existing row (in the case of index tables, this is the row with the lowest table index).
    The COLLECT statement sets sy-tabix to the table index of the inserted or existing row, in the case of standard tables and sorted tables, and to the value 0 in the case of hashed tables.
    Outside of classes, you can omit wa INTO if the internal table has an identically-named header line itab. The statement then implicitly uses the header line as the work area.
    COLLECT should only be used if you want to create an internal table that is genuinely unique or compressed. In this case, COLLECT can greatly benefit performance. If uniqueness or compression are not required, or the uniqueness is guaranteed for other reasons, the INSERT statement should be used instead.
    The use of COLLECT for standard tables is obsolete. COLLECT should primarily be used for hashed tables, as these have a unique table key and a stable hash administration.
    If a standard table is filled using COLLECT, it should not be edited using any other statement with the exception of MODIFY. If the latter is used with the addition TRANSPORTING, you must ensure that no key fields are changed. This is the only way to guarantee that the table entries are always unique and compressed, and that the COLLECT statement functions correctly and benefits performance. The function module ABL_TABLE_HASH_STATE can be used to check whether a standard table is suitable for editing using COLLECT.
    Example
    Compressed insertion of data from the database table sflight into the internal table seats_tab. The rows in which the key components carrid and connid are identical are compressed by adding the number of occupied seats to the numeric component seatsocc.
    DATA: BEGIN OF seats,
            carrid   TYPE sflight-carrid,
            connid   TYPE sflight-connid,
            seatsocc TYPE sflight-seatsocc,
          END OF seats.
    DATA seats_tab LIKE HASHED TABLE OF seats
                   WITH UNIQUE KEY carrid connid.
    SELECT carrid connid seatsocc
           FROM sflight
           INTO seats.
      COLLECT seats INTO seats_tab.
    ENDSELECT.
    Exceptions
    Catchable Exceptions
    CX_SY_ARITHMETIC_OVERFLOW
    Cause: Overflow in integer field during totals formation
    Runtime Error: COLLECT_OVERFLOW
    Cause: Overflow in type p field during totals formation
    Runtime Error: COLLECT_OVERFLOW_TYPE_P
    Non-Catchable Exceptions
    Cause: COLLECT used for non-numeric fields
    Runtime Error: TABLE_COLLECT_CHAR_IN_FUNCTION
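    Since CX_SY_ARITHMETIC_OVERFLOW is listed above as catchable, a COLLECT in custom code can be guarded with TRY ... ENDTRY instead of dumping. A small sketch; the declarations are illustrative only:

    ```abap
    DATA: BEGIN OF ls_row,
            key TYPE c LENGTH 4,
            amt TYPE p LENGTH 4 DECIMALS 2,
          END OF ls_row,
          lt_rows LIKE HASHED TABLE OF ls_row WITH UNIQUE KEY key.

    TRY.
        COLLECT ls_row INTO lt_rows.
      CATCH cx_sy_arithmetic_overflow.
        " running total no longer fits into AMT: widen the field or
        " handle/log the offending key here instead of a short dump
    ENDTRY.
    ```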

  • Collect Overflow Type P - Patching? Time evaluation?

    Hi Experts!
    I have an issue and I need your help.
    The client is in the process of patching Development, QA and Production. They have patched the DEV and QA clients and have yet to complete production.
    Testing time evaluation in QA, with a forced retro back a couple of years, causes the following error (COLLECT_OVERFLOW_TYPE_P):
    The ABAPer came back with this:
    I have processed time evaluation (with the log) at specific points in time to try to establish why the SALDO/TES tables are keeling over, but I am really struggling to get to the bottom of this.
    The issue is not present in production. I am concerned that if production is patched and this issue remains, there will be significant issues for payroll!
    Is it a patching issue?
    Is it related to note 1879092?
    What is SALDO_NEW, and is it something to do with this?
    Any help would be greatly appreciated!
    Will

    Hi Inge,
    I have had the same issue as you in the past, so that's where I started investigating. None of the time types come anywhere near the limits... Thanks for your reply, it's a great suggestion!
    The ABAPer is checking with the Basis team whether note 1942144 has been applied.
    Will

  • Internal Table Group By

    Hi everyone,
      I've got a FM with a table parameter, and I want to perform a "group by" operation on it (like SQL GROUP BY). Any suggestions?
    Regards,
    Kit

    COLLECT
    Syntax
    COLLECT wa INTO itab [result].
    Effect
    This statement inserts the contents of a work area wa either as single row into an internal table itab or adds the values of its numeric components to the corresponding values of existing rows with the same key. As of Release 6.10, you can use result to set a reference to the inserted or changed row in the form of a field symbol or data reference.
    Prerequisite for the use of this statement is that wa is compatible with the row type of itab and all components that are not part of the table key must have a numeric data type (i, p, f).
    In standard tables that are only filled using COLLECT, the entry is determined by a temporarily created hash administration. The workload is independent of the number of entries in the table. The hash administration is temporary and is generally invalidated when the table is accessed for changing. If further COLLECT statements are entered after an invalidation, a linear search of all table rows is performed. The workload for this search increases in a linear fashion in relation to the number of entries.
    In sorted tables, the entry is determined using a binary search. The workload has a logarithmic relationship to the number of entries in the table.
    In hashed tables, the entry is determined using the hash administration of the table and is always independent of the number of table entries.
    If no line is found with an identical key, a row is inserted as described below, and filled with the content of wa:
    In standard tables the line is appended.
    In sorted tables, the new line is inserted in the sort sequence of the internal table according to its key values, and the table index of subsequent rows is increased by 1.
    In hashed tables, the new row is inserted into the internal table by the hash administration, according to its key values.
    If the internal table already contains one or more rows with an identical key, those values of the components of work area wa that are not part of the key, are added to the corresponding components of the uppermost existing row (in the case of index tables, this is the row with the lowest table index).
    The COLLECT statement sets sy-tabix to the table index of the inserted or existing row, in the case of standard tables and sorted tables, and to the value 0 in the case of hashed tables.
    Outside of classes, you can omit wa INTO if the internal table has an identically-named header line itab. The statement then implicitly uses the header line as the work area.
    COLLECT should only be used if you want to create an internal table that is genuinely unique or compressed. In this case, COLLECT can greatly benefit performance. If uniqueness or compression are not required, or the uniqueness is guaranteed for other reasons, the INSERT statement should be used instead.
    The use of COLLECT for standard tables is obsolete. COLLECT should primarily be used for hashed tables, as these have a unique table key and a stable hash administration.
    If a standard table is filled using COLLECT, it should not be edited using any other statement with the exception of MODIFY. If the latter is used with the addition TRANSPORTING, you must ensure that no key fields are changed. This is the only way to guarantee that the table entries are always unique and compressed, and that the COLLECT statement functions correctly and benefits performance. The function module ABL_TABLE_HASH_STATE can be used to check whether a standard table is suitable for editing using COLLECT.
    Example
    Compressed insertion of data from the database table sflight into the internal table seats_tab. The rows in which the key components carrid and connid are identical are compressed by adding the number of occupied seats to the numeric component seatsocc.
    DATA: BEGIN OF seats,
            carrid   TYPE sflight-carrid,
            connid   TYPE sflight-connid,
            seatsocc TYPE sflight-seatsocc,
          END OF seats.
    DATA seats_tab LIKE HASHED TABLE OF seats
                   WITH UNIQUE KEY carrid connid.
    SELECT carrid connid seatsocc
           FROM sflight
           INTO seats.
      COLLECT seats INTO seats_tab.
    ENDSELECT.
    Exceptions
    Catchable Exceptions
    CX_SY_ARITHMETIC_OVERFLOW
    Cause: Overflow in integer field during totals formation
    Runtime Error: COLLECT_OVERFLOW
    Cause: Overflow in type p field during totals formation
    Runtime Error: COLLECT_OVERFLOW_TYPE_P
    Non-Catchable Exceptions
    Cause: COLLECT used for non-numeric fields
    Runtime Error: TABLE_COLLECT_CHAR_IN_FUNCTION

  • How to cumulate the quantity

    hi
    how can i cumulate the quantity from multiple records into one record by the unique fields plant, matnr, lgort?
    please help me
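    The COLLECT described in the reply below fits this case directly. A sketch assuming MARD-like fields (werks, matnr, lgort, plus a quantity): make the three fields the key of a hashed table and let COLLECT total the quantity.

    ```abap
    DATA: BEGIN OF ls_sum,
            werks TYPE mard-werks,             " plant    } unique
            matnr TYPE mard-matnr,             " material } key
            lgort TYPE mard-lgort,             " storage  } fields
            menge TYPE p LENGTH 13 DECIMALS 3, " cumulated quantity
          END OF ls_sum,
          lt_sum LIKE HASHED TABLE OF ls_sum
                 WITH UNIQUE KEY werks matnr lgort,
          lt_src LIKE STANDARD TABLE OF ls_sum, " multiple rows per key
          ls_src LIKE ls_sum.

    LOOP AT lt_src INTO ls_src.
      COLLECT ls_src INTO lt_sum.              " one summed row per key
    ENDLOOP.
    ```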

    Syntax Diagram
    COLLECT
    Basic form
    COLLECT [wa INTO] itab.
    Extras:
    1. ... ASSIGNING <fs>
    2. ... REFERENCE INTO dref
    3. ... SORTED BY f
    The syntax check performed in an ABAP Objects context is stricter than in other ABAP areas. See Cannot Use Short Forms in Line Operations.
    Effect
    COLLECT allows you to create unique or summarized datasets. The system first tries to find a table entry corresponding to the table key. (See also Defining Keys for Internal Tables). The key values are taken either from the header line of the internal table itab, or from the explicitly-specified work area wa. The line type of itab must be flat - that is, it cannot itself contain any internal tables. All the components that do not belong to the key must be numeric types ( ABAP Numeric Types).
    If the system finds an entry, the numeric fields that are not part of the table key (see ABAPNumeric Types) are added to the sum total of the existing entries. If it does not find an entry, the system creates a new entry instead.
    The way in which the system finds the entries depends on the kind of the internal table:
    STANDARD TABLE:
    The system creates a temporary hash administration for the table to find the entries. This means that the runtime required to find them does not depend on the number of table entries. The administration is temporary, since it is invalidated by operations (such as DELETE, INSERT, MODIFY, or SORT). A subsequent COLLECT is then no longer independent of the table size, because the system has to use a linear search to find entries. For this reason, you should only use COLLECT to fill standard tables.
    SORTED TABLE:
    The system uses a binary search to find the entries. There is a logarithmic relationship between the number of table entries and the search time.
    HASHED TABLE:
    The system uses the internal hash administration of the table to find records. Since (unlike standard tables), this remains intact even after table modification operations, the search time is always independent of the number of table entries.
    For standard tables and SORTED TABLEs, the system field SY-TABIX contains the number of the existing or newly-added table entry after the COLLECT. With HASHED TABLEs, SY-TABIX is set to 0.
    Notes
    COLLECT allows you to create a unique or summarized dataset, and you should only use it when this is necessary. If neither of these characteristics are required, or where the nature of the table in the application means that it is impossible for duplicate entries to occur, you should use INSERT [wa INTO] TABLE itab instead of COLLECT. If you do need the table to be unique or summarized, COLLECT is the most efficient way to achieve it.
    If you use COLLECT with a work area, the work area must be compatible with the line type of the internal table.
    If you edit a standard table using COLLECT, you should only use the COLLECT or MODIFY ... TRANSPORTING f1 f2 ... statements (where none of f1, f2, ... may be in the key). Only then can you be sure that:
    - the internal table actually is unique or summarized
    - COLLECT runs efficiently; the check whether the dataset already contains an entry with the same key has a constant search time (hash procedure)
    If you use any other table modification statements, the check for entries in the dataset with the same key can only run using a linear search (and will accordingly take longer). You can use the function module ABL_TABLE_HASH_STATE to test whether the COLLECT has a constant or linear search time for a given standard table.
    Example
    Summarized sales figures by company:
    TYPES: BEGIN OF COMPANY,
            NAME(20) TYPE C,
            SALES    TYPE I,
          END OF COMPANY.
    DATA: COMP    TYPE COMPANY,
          COMPTAB TYPE HASHED TABLE OF COMPANY
                                    WITH UNIQUE KEY NAME.
    COMP-NAME = 'Duck'.  COMP-SALES = 10. COLLECT COMP INTO COMPTAB.
    COMP-NAME = 'Tiger'. COMP-SALES = 20. COLLECT COMP INTO COMPTAB.
    COMP-NAME = 'Duck'.  COMP-SALES = 30. COLLECT COMP INTO COMPTAB.
    Table COMPTAB now has the following contents:
              NAME    | SALES
              Duck    |   40
              Tiger   |   20
    Addition 1
    ... ASSIGNING <fs>
    Effect
    If this statement is successfully executed, the field symbol <fs> is set to the changed or new entry. Otherwise the field symbol remains unchanged.
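    A minimal sketch of the ASSIGNING addition, reusing the COMPANY type from the example above (the field symbol name <entry> is illustrative):

    ```abap
    DATA: comp    TYPE company,
          comptab TYPE HASHED TABLE OF company WITH UNIQUE KEY name.
    FIELD-SYMBOLS: <entry> TYPE company.

    comp-name = 'Duck'. comp-sales = 10.
    COLLECT comp INTO comptab ASSIGNING <entry>.
    " <entry> now points at the inserted (or summed) table line;
    " modifying <entry> changes the table entry directly, with no
    " second read needed.
    WRITE: / <entry>-name, <entry>-sales.
    ```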
    Addition 2
    ... REFERENCE INTO dref
    Effect
    If this statement is successfully executed, the reference to the relevant line is placed in dref. Otherwise the data reference dref remains unchanged.
    Addition 3
    ... SORTED BY f
    Effect
    COLLECT ... SORTED BY f is obsolete, and should no longer be used. It only applies to standard tables, and has the same function as APPEND ... SORTED BY f, which you should use instead. (See also Obsolete Language Elements).
    Note
    Performance:
    If you are still using internal tables with headers but, as recommended, keep your data in work areas with a different name, you do not need to assign the data to the header first in order to pass it to the internal tables. Instead, you should use the work area directly as with tables without headers. For example, "APPEND wa TO itab." is roughly twice as fast as "itab = wa. APPEND itab.". The same applies to COLLECT and INSERT.
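    The performance note above can be sketched as follows (itab is declared WITH HEADER LINE purely to show the slower form; the obsolete header-line syntax is only valid outside classes):

    ```abap
    DATA: itab TYPE TABLE OF i WITH HEADER LINE,
          wa   TYPE i.

    wa = 42.

    " Slower: copy the data into the header line first, then append it.
    itab = wa.
    APPEND itab.

    " Faster (roughly twice as fast, per the note): pass the work
    " area directly, bypassing the header line. The same applies
    " to COLLECT wa INTO itab and INSERT wa INTO TABLE itab.
    APPEND wa TO itab.
    ```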
    The runtime of a COLLECT increases with the width of the table key and the number of numeric fields whose contents are summated.
    Exceptions
    Catchable Exceptions
    CX_SY_ARITHMETIC_OVERFLOW
    Cause: Overflow in an integer field when forming totals
    Runtime Error: COLLECT_OVERFLOW
    Cause: Overflow in a type P field when forming totals
    Runtime Error: COLLECT_OVERFLOW_TYPE_P
    Non-Catchable Exceptions
    Cause: COLLECT on non-numeric fields
    Runtime Error: TABLE_COLLECT_CHAR_IN_FUNCTION
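    The COLLECT_OVERFLOW_TYPE_P dump discussed in this thread can be reproduced with a deliberately narrow packed field: a type P field of length n bytes holds at most 2n-1 digits (including decimals), so summing past that limit raises the error. A minimal sketch (field and type names are illustrative):

    ```abap
    TYPES: BEGIN OF ty_sum,
             matnr    TYPE c LENGTH 18,
             menge(8) TYPE p DECIMALS 3,  " 8 bytes = max 15 digits incl. decimals
           END OF ty_sum.

    DATA: ls_sum TYPE ty_sum,
          lt_sum TYPE STANDARD TABLE OF ty_sum.

    ls_sum-matnr = 'MAT1'.
    ls_sum-menge = '999999999999.999'.    " close to the field's capacity
    COLLECT ls_sum INTO lt_sum.
    COLLECT ls_sum INTO lt_sum.           " the summed value needs 16 digits:
                                          " runtime error COLLECT_OVERFLOW_TYPE_P
    ```

    The remedy is the one given further down in this thread: widen the packed field in the program, e.g. declare menge(16) TYPE p DECIMALS 3 instead of menge(8).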
    Related
    APPEND, WRITE ... TO, MODIFY, INSERT
    Additional help
    Inserting Summarized Table Lines

  • GD13 Report Error

    Hi Friends,
    Some documents are not updated in the Special Purpose Ledger GD13 report, although they show in FS10N.
    There was a problem in the customizing for the material document; it has since been corrected, and all new records are now updated.
    My question is: how can we repost all the old documents that were not updated in the GD13 report?
    Please reply asap.
    Thanks in advance.
    JD

    HI,
    I guess you want to get all G/L document numbers that are not posted to your special ledger so you can repost them? I had a similar case and built a query that delivers a "document flow" for accounting documents (like the document flow in logistics) based on table BKPF, and added the BAPI "BAPI_ACC_DOCUMENT_RECORD". Thus for every FI document I got the information whether an existing follow-on accounting document exists or not. I filtered out all the FI documents with no follow-on document and reposted them.
    Best regards, Christian

  • Error in report  - monthly journal entries ?

    HI Gurus
    I am running a custom report in a background session. The job log shows the error ABAP/4 Processor: COLLECT_OVERFLOW_TYPE_P. When I double-click this message, it says that an internal table field has been defined too small. What exactly could the problem be? Is it a problem with the input parameters?
    My report has only 3 input parameters:
    Period (from/to)
    Document Type (from/to)
    Year. Please guide me.
    Thanks
    Meenkahsi.N

    One of the fields that collects data read from the database into that internal table is too small. You need to change the program: look at the field sizes and increase the size.
    For example if it looks like
          PFELD(8)        TYPE P,
    Changing it to the required size say for example
          PFELD(9)        TYPE P,

  • ST22 - Name of runtime error

    Hello, I would like to understand the meaning of the following runtime errors. Could you please explain what they mean?
    TIME_OUT                    
    OPEN_DATASET_NO_AUTHORITY   
    MESSAGE_TYPE_X              
    SYNTAX_ERROR                
    RAISE_EXCEPTION             
    ITS_TEMPLATE_NOT_FOUND      
    RFC_NO_AUTHORITY            
    DYNPRO_FIELD_CONVERSION     
    COLLECT_OVERFLOW_TYPE_P     
    LIST_ILLEGAL_PAGE           
    CONNE_IMPORT_CONVERSION_ERROR
    GETWA_NOT_ASSIGNED     
    TIME_OUT               
    UNCAUGHT_EXCEPTION     
    CALL_FUNCTION_SEND_ERROR
    Thanks in advance.

    Open the dumps one by one and read the full content.
    Each dump shows you the sections "What happened?", "Error analysis", "How to correct the error", etc.
    Regards,
    Nick Loy

  • GD13: TSV_TNEW_PAGE_ALLOC_FAILED dump: workaround

    Hi All,
    While executing GD13 transaction code for some functional activity, the below dump is occurring:
    TSV_TNEW_PAGE_ALLOC_FAILED.
    We cannot change any memory parameter for a temporary activity that consumes high memory, nor can the activity be split into steps, due to business criticality. From a BASIS point of view, what can be done to make the GD13 activity run successfully while avoiding the memory issue?
    System ECC 6.0 (Netweaver 2004s)
    (One SAP note: 600176, is available, but that is for lower version, not applicable for ECC 6.0)
    Please share your ideas.
    Thanks & Regards,
    Sujit.

    Hi Juan,
    Thanks for reply.
    Below is the dump details:
    Runtime Errors         TSV_TNEW_PAGE_ALLOC_FAILED
    Short text
         No more storage space available for extending an internal table.
    What happened?
         You attempted to extend an internal table, but the required space was
         not available.
    Error analysis
        The internal table "\FUNCTION=G_GLU1_ACCUMULATE_FOR_GD13\DATA=E_T_GLU1_***"
         could not be further extended. To enable
    error handling, the table had to be deleted before this log was written.
        As a result, the table is displayed further down or, if you branch to
        the ABAP Debugger, with 0 rows.
        At the time of the termination, the following data was determined for
        the relevant internal table:
        Memory location: "Session memory"
        Row width: 4764
        Number of rows: 966
        Allocated rows: 966
        Newly requested rows: 2 (in 1 blocks)
        Last error logged in SAP kernel
        Component............ "EM"
        Place................ "SAP-Server gvaaps51_PRD_51 on host gvaaps51 (wp 5)"
        Version.............. 37
        Error code........... 7
        Error text........... "Warning: EM-Memory exhausted: Workprocess gets PRIV "
        Description.......... " "
        System call.......... " "
        Module............... "emxx.c"
        Line................. 1897
        The error reported by the operating system is:
        Error number..... " "
        Error text....... " "
    I am not pasting the SAP recommendation for increasing memory parameter values, which is generic.
    I have to find some way to help the team to execute the activity.
    Your suggestion will be highly appreciated.
    Thanks & Regards,
    Sujit.

  • Error in starting Adobe Bridge in Photoshop CS2

    I've just installed Photoshop CS2; however, upon opening Adobe Bridge this error message appears: "The application has failed to start because libagluc28.dll was not found. Reinstalling the application may fix the problem."
    I have reinstalled and click repair but to no avail
    I followed Adobe Support Knowledgebase solution and run CMD and this appears:
    C:\Documents and Settings\Jesus M Ferraris>
    then i entered the command
    cacls c:\windows\installer /T /E /C /G administrators:F
    but an error message appears: "'cacls' is not recognized as an internal or external command, operable program or batch file"
    I also entered the next command
    cacls "c:\documents and setting\all users" /Y /E /C /G administrators:F
    still the same error as above appears. Please help. Have I missed something, or was my procedure correct?
    P4, 512ram, WXP 80gHD

