Problem during number assignment cause record in IW51

Hi,
I am getting an error from my BAPI call (BAPI_SERVNOT_CREATE). Kindly suggest what I am supposed to do.
The error message "Problem during number assignment cause record" is raised while I assign the cause code.
Please have a look at my code link.
http://selectoptions.blogspot.com/2012/03/report-zbapiservnot.html

Hi
The problem seems to be with the number range assigned to the document type you are using for posting the rentals. When you simulate, no posting happens, which is why the problem does not surface. But when the simulation tick is removed, the system actually tries to post the document; since the number range is of internal type, the number you are supplying yourself (I assume you are passing a number for your document, or something similar; I am not sure about your module) makes the system throw the error. Please check your entry.
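As a reference, here is a minimal, hedged sketch of calling BAPI_SERVNOT_CREATE so that the notification number is assigned internally (no external number is passed). The notification type 'S1', the cause code/code group and the BAPI2080_* field names are assumptions, not taken from the original post; please verify them in SE37 for your release.
DATA: ls_header  TYPE bapi2080_nothdri,
      ls_hdr_exp TYPE bapi2080_nothdre,
      lt_causes  TYPE STANDARD TABLE OF bapi2080_notcausi,
      ls_cause   TYPE bapi2080_notcausi,
      lt_return  TYPE STANDARD TABLE OF bapiret2.
ls_header-short_text = 'Test service notification'.
" Placeholder cause data - replace with your own code group / cause code
ls_cause-cause_codegrp = 'GRP1'.
ls_cause-cause_code    = '01'.
APPEND ls_cause TO lt_causes.
" No external number is supplied, so the number range of the
" notification type must allow internal assignment.
CALL FUNCTION 'BAPI_SERVNOT_CREATE'
  EXPORTING
    notif_type         = 'S1'
    notifheader        = ls_header
  IMPORTING
    notifheader_export = ls_hdr_exp
  TABLES
    notifcaus          = lt_causes
    return             = lt_return.
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
  EXPORTING
    wait = 'X'.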

Similar Messages

  • Portal Master-detail form how to auto assign detail record sequence number

    How can I auto-assign the detail record sequence number in a Portal master-detail form? Please help.

    You can just read the following section
    Can I specify a sequence number generator as the default value for a form column?
    Yes. Enter the following in the "default value" field for the column:
    #<schema name>.<sequence name>.nextval
    where <schema name> is the name of the schema containing the sequence, and <sequence name> is the name of the sequence. The entry is preceded by a "#".
    For example, if the schema name is "SCOTT", and the sequence name is "CUSTOMER_SEQ", the default value entry is:
    #SCOTT.CUSTOMER_SEQ.NEXTVAL
    You can do the same in a master-detail form.
    For more information on forms, please refer to the following URL:
    http://otn.oracle.com/products/iportal/htdocs/portal_faq.htm#BuildingApplications
    Hope it helps.

  • Problem during recording of CJ01 transaction for BDC

    Hello All,
    I am facing a problem while recording CJ01 (project creation) for a BDC program.
    The problem occurs during recording: after I enter the first WBS element at level 1 and press the '+' button to create another WBS element at level 2, the first WBS element (level 1) moves to the bottom of the line-item table. If I then try to create the level 2 WBS element in the first line, it is not allowed, because the level 1 WBS element now sits below it.
    However, if I enter the first WBS element (level 1) in the first line, move to the next line, and enter the second WBS element (level 2), it is created, since the level 2 WBS is below the level 1. But this approach cannot be used for a BDC recording.
    Can anyone help in this regard?
    Thanks in Advance
    Ashish

    Hi Ashish,
    NW is right; it is not advisable to use BDC for CJ01. You get more control and better results if you use the following function module:
    1. Use BAPI_PROJECT_MAINTAIN to create the project definition and the WBS structure in one go. See the documentation of the BAPI on how to use it, and post the result in this forum if you face any problem.
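    For orientation only, a minimal sketch of such a call creating a project definition and one level-1 WBS element in a single BAPI_PROJECT_MAINTAIN call; the project/WBS identifiers are placeholders and the parameter and field names follow the standard BAPI interface as I recall it, so verify them in SE37 before use.
    DATA: ls_proj    TYPE bapi_project_definition,
          lt_methods TYPE STANDARD TABLE OF bapi_method_project,
          ls_method  TYPE bapi_method_project,
          lt_wbs     TYPE STANDARD TABLE OF bapi_wbs_element,
          ls_wbs     TYPE bapi_wbs_element,
          lt_msg     TYPE STANDARD TABLE OF bapi_meth_message.
    ls_proj-project_definition = 'P-0001'.            " placeholder project
    ls_proj-description        = 'Demo project'.
    ls_method-refnumber  = '000001'.
    ls_method-objecttype = 'ProjectDefinition'.
    ls_method-method     = 'Create'.
    ls_method-objectkey  = ls_proj-project_definition.
    APPEND ls_method TO lt_methods.
    ls_wbs-wbs_element        = 'P-0001-01'.          " placeholder WBS element
    ls_wbs-project_definition = ls_proj-project_definition.
    ls_wbs-description        = 'Level 1 WBS'.
    APPEND ls_wbs TO lt_wbs.
    ls_method-refnumber  = '000001'.
    ls_method-objecttype = 'WBS-Element'.
    ls_method-method     = 'Create'.
    ls_method-objectkey  = ls_wbs-wbs_element.
    APPEND ls_method TO lt_methods.
    " The closing 'Save' method posts everything collected above.
    CLEAR ls_method.
    ls_method-method = 'Save'.
    APPEND ls_method TO lt_methods.
    CALL FUNCTION 'BAPI_PROJECT_MAINTAIN'
      EXPORTING
        i_project_definition = ls_proj
      TABLES
        i_method_project     = lt_methods
        i_wbs_element_table  = lt_wbs
        e_message_table      = lt_msg.
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait = 'X'.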
    Ravi

  • Workflow problem during import of record

    Hi,
    We get the following error message during import of a record using import manager:
    4868 2008/05/05 15:12:44.861   Error: RC: 0x80000002  
    4868 2008/05/05 15:12:44.861   Error: Failed to GetJoinCheckOutRights. [Owner: WF Owner; Job Id: 10165]
    Currently we run MDM 5.5 SP6 Patch 2. This worked fine before we applied patch 2.
    The record is getting checked out in the start step. If I don't check out the record it's working fine.
    Any ideas?
    BR
    Michael

    Hi Michael,
    In the start step of the workflow there is an option "Checkout records" (Yes/No). If you select Yes, the record is checked out when it enters the workflow and no user can modify it for the rest of the workflow cycle. If you want the record to be modifiable at some step during the workflow, you have to give Join Checkout rights to the user of that particular step. To do that, log in as the user who is the owner of the workflow, right-click on the record(s) and choose Check In/Out -> Modify Join Permissions, then add a user or a role. The added user can then join the checkout (right-click on the record -> Join Checkout) and modify the record.
    Reward if helpful.
    With Regards,
    Vinay Yadav

  • Missing orders in SAP-Gap in number assignment

    Hi All,
      We are facing a situation where there is a gap in the assignment of order numbers. We do not have any trace of failed updates in ST22, SM13, etc. Does this mean that we missed only a "number" and not an order? Are there any other transactions that can give some information on this? Is there any documentation from SAP that talks about a similar issue?

    This is the report I used for FI: RFBNUM00
    FYI, OSS note 62077:
    Symptom
    Gaps (jumps) occur when allocating internal numbers.
    The status of the number range does not match the number last assigned.
    The number assignment does not match the insert sequence.
    Additional key words
    Document number, Number range, Number range object, Buffering, Current number level, Trip number assignment, Number interval, CO document, CO actual posting, Inspection lot, Material document, Physical inventory document, Production order number, Planned order number, Process order number, Maintenance order number
    FB01 VF01 KO88 KE21 KE11 FD01 FK01 XK01 XDN1 MB01 MB0A MB11 MB1A MB1B MB1C MB31 KANK KB11 KB13 KB14 KB41 KB43 KB44 KB21 KB23 KB24 KB31 KB33 KB34 KB51 KB53 KB54 PR01 PR02 PR03 PR04 PR05 XD01 VD01 MK01 SNUM SM56 SNRO VL01 VL02 CO01 CO40 CO41 VA01 MR1M MIRO
    Cause and prerequisites
    A large number of number range objects are buffered. When a number range object is buffered, numbers are not updated individually in the database; instead, the first time a number is requested, a preset group of numbers (its size depends on the number range object) is reserved in the database and made available to the application server in question. Subsequent numbers can then be taken directly from the application server buffer. If the application server buffer is exhausted, new numbers are reserved in the database.
    The effects described under "Symptom" then result:
    If an application server is shut down, the numbers that were left in the buffer (that is, that are not yet assigned) are lost. As a result, there are gaps in the number assignment.
    The status of the number range interval reflects the next free number that was not yet transferred to an application server for intermediate buffering. The current number level does not therefore display the number of the "next" object.
    The current number level (per server) can be displayed with Transaction SM56. Start Transaction SM56 and branch into the menu 'Goto' -> 'Entries'. In the dialog box, enter the client, the affected number range object (for example, RK_BELEG) and possibly the required subobject (corresponds to the controlling area for the object RK_BELEG).
    If several application servers are used, the chronological insert sequence is not determined by the numerical sequence on the individual hosts, due to the separate buffering on the various servers.
    Buffering the number range objects has a positive effect on the performance as no database access (on the number range table NRIV) is required for every posting. Furthermore, a serialization of this table (database blocking) is prevented to a large extent so that posting procedures can be carried out in parallel.
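    To make the buffering mechanism above concrete: applications draw numbers through the standard function module NUMBER_GET_NEXT, and only when the server's buffer is exhausted does a database access on NRIV take place. A minimal, hedged sketch follows; the object RK_BELEG, interval '01' and the controlling-area subobject are example values taken from this note, so check SNRO/SM56 for your own object.
    DATA: lv_number(20)    TYPE n,
          lv_returncode(1) TYPE c.
    " With main memory buffering, numbers handed to this server's buffer but
    " never used again are lost at server shutdown - these are the gaps.
    CALL FUNCTION 'NUMBER_GET_NEXT'
      EXPORTING
        nr_range_nr             = '01'          " interval as defined in SNRO
        object                  = 'RK_BELEG'    " example number range object
        subobject               = '1000'        " controlling area (example)
      IMPORTING
        number                  = lv_number
        returncode              = lv_returncode " warnings, e.g. interval running low
      EXCEPTIONS
        interval_not_found      = 1
        number_range_not_intern = 2
        object_not_found        = 3
        OTHERS                  = 4.
    IF sy-subrc <> 0.
      " handle the error
    ENDIF.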
    Solution
    As the number range buffering does not lose any guaranteed attributes, a correction is not required.
    If you still require a continuous allocation, you can deactivate the number range buffering deliberately for individual objects.
    Proceed as follows:
    - Start transaction SNRO and enter the affected object
    - Press the 'Change' pushbutton
    - Deactivate buffering: Menu 'Edit' -> 'Set buffering' -> 'No   buffering'
    - If you only want to change the buffer size, please enter the   corresponding value in the field 'No. of numbers in buffer'
    - Save the changes
    Please note that this change is a modification. The modification is overwritten as soon as the affected number range object is redelivered - that is, you must check the change manually after every import of a maintenance level.
    For the following number range objects, gaps may cause uncertainty among users, as sequential numbering is 'expected':
    Area CO:
    - RK_BELEG   (CO Document)
    Caution: Note that the problems described in notes 20965 and 29030 can occur if you deactivate the buffering.
    - COPA_IST   (Document number during actual posting)
    - COPA_PLAN  (Document number with planned posting)
    - COPA_OBJ   (Business segment number)
    Area FI:
    - DEBITOR    (Customer master data)
    - KREDITOR   (Vendor master record data)
    Area HR:
    - RP_REINR   (Trip numbers)
    Area PM, PP, PS
    - AUFTRAG    (Order number, Production, Process, Maintenance order
    number, network number.)
    - QMEL_NR    (Number range message)
    Area MM:
    - MATBELEG   (Material documents)
    - MATERIALNR (Material master)
    Area QM:
    - QLOSE      (Inspection lots in QM)
    - QMEL_NR    (Number range message)
    - QMERK      (Completion confirmation number)
    - QMERKMALE  (Master inspection characteristics in QSS)
    - QMERKRUECK (Completion confirmation number of an inspection
                 characteristic in QM results processing)
    - QMETHODEN  (Inspection methods in QM)
    - ROUTING_Q  (Number ranges for inspection plans)
    - QCONTROLCH (Control chart)
    Area Workflow:
    - EDIDOC     (IDocs)
    Number range buffering can be activated or deactivated at any time.
    Number range objects that must be continuous due to legal specifications (for example RF_BELEG, RV_BELEG), or due to a corresponding application logic must not be buffered with the buffering type 'Main memory buffering'. Please see also notes 37844 (for RF_BELEG) and 23835 (for RV_BELEG).
    Source code corrections
    Thanks
    SK

  • Facing problem during upload of Routing data using CA01-BDC - URGENT

    Dear All,
    When I am trying to upload routing data using CA01 in a table control scenario, the last 2 records from my test file are not getting uploaded.
    For example, I have 47 records in my test file, and after setting the 'Default size' parameter (to avoid screen resolution problems)
    I have 15 table control line items per page. The page-down logic ('=P+') is working fine, but my BDC code below fails to pick up
    the remaining last 2 records from the test file.
    Analysis: When I run my "Call Transaction" BDC in foreground, the 1st page down occurs after the 15th record, and the 2nd page down occurs after the 29th record (since the 15th record of page 1 appears at the top of page 2 in the table control). The 3rd page down occurs after the 43rd record
    (since the 29th record of page 2 appears at the top of page 3). On the 4th table control page the 43rd record of the previous page is at the top; the program then takes the 44th and 45th records from the test file and triggers SAVE (=BU). Thus the last 2 records
    (i.e. the 46th and 47th records) are not uploaded to the routing screen from the test file.
    If anybody has encountered this scenario before, please help me fix the bug here; it's very urgent.
    FYI, the other 45 records are uploaded successfully; all screen field values appear properly in the routing screen, and there is no issue there.
    Thanks very much…
    Thanks & Regards
    Sudipta – Project Lead
    Volvo Client Location
    I am pasting my BDC source code below:
    REPORT ZRT1_UPLOAD_CA01_F
                           NO STANDARD PAGE HEADING
                           LINE-SIZE 255.
    *                        I N C L U D E S                              *
    * Include for Data Declarations
    INCLUDE zrout_top.
    * Include for Forms
    INCLUDE zrout_form.
    INCLUDE zrout_include_f_ca01.
    *AT SELECTION-SCREEN ON VALUE-REQUEST FOR <field>
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR P_FILE.
    * Attaching F4 help with filename
      PERFORM F1001_GET_F4.
    *           S T A R T   -   O F  -  S E L E C T I O N                 *
    START-OF-SELECTION.
    * Perform to read the input file
      PERFORM f_read_file.
    * Perform to fill the BDC data
      PERFORM f_fill_bdctab.
    *               E N D   -   O F  -  S E L E C T I O N                 *
    END-OF-SELECTION.
      FREE: i_bdcdata,
            i_messtab,
            i_record.
    *&---------------------------------------------------------------------*
    *&  Include           ZROUT_TOP                                        *
    *                  D A T A B A S E    T A B L E S                     *
    TABLES: t100.          "Messages
    *                D A T A    D E C L A R A T I O N S                   *
    ***************** T A B L E    T Y P E S *****************************
    * For input data
    TYPES: BEGIN OF ty_record,
            matnr(18),  "Material Number
            werks(4),   "Plant
            verwe(3),   "Usage
            statu(3),   "Status
            arbpl(8),   "Work Center
            steus(4),   "Control Key
            ltxa1(40),  "Description of Operation
            bmsch(13),  "Base Quantity
            meinh(3),   "Unit of Measure
            vgw01(11),  "Machine
            vgw02(11),  "Labour (assumed column; wa_record-vgw02 is read below)
            vge01(3),   "Unit of measure of activity
          END OF ty_record.
    *********** I N T E R N A L    T A B L E S ***********************
    * Internal Table for input file name
    DATA: i_file_tab  TYPE STANDARD TABLE OF sdokpath   INITIAL SIZE 0.
    * Internal Table for BDC Data
    DATA: i_bdcdata   TYPE STANDARD TABLE OF bdcdata    INITIAL SIZE 0.
    * Internal Table for BDC Messages
    DATA: i_messtab   TYPE STANDARD TABLE OF bdcmsgcoll INITIAL SIZE 0.
    * Internal Table for Input file
    DATA: i_record TYPE STANDARD TABLE OF ty_record INITIAL SIZE 0.
    ************* W O R K      A R E A S *************************
    * Work Area for input file name
    DATA: wa_file_tab LIKE sdokpath.
    * Work Area for BDC Data
    DATA: wa_bdcdata LIKE bdcdata.
    * Work Area for BDC Messages
    DATA: wa_messtab LIKE bdcmsgcoll.
    * Work Area for Input file
    DATA: wa_record TYPE ty_record.
    **************** V A R I A B L E S ****************************
    DATA: v_filename TYPE string,
          v_fnam(40) TYPE c.
    DATA: wa_opt TYPE ctu_params.
    **************** C O N S T A N T S ***************************
    CONSTANTS: c_werks TYPE rc27m-werks VALUE 'tp',
               c_steus TYPE plpod-steus VALUE 'PP01'.
    *Selection Screen.
    SELECTION-SCREEN BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
    PARAMETERS:
    *           Input file name
                P_FILE TYPE rlgrap-filename OBLIGATORY. " DEFAULT 'C:\'.
    SELECTION-SCREEN END OF BLOCK B1.
    *&---------------------------------------------------------------------*
    *&  Include           ZROUT_FORM                                       *
    *&      Form  f_fill_bdctab
    *       Form to fill the BDC Data
    FORM f_fill_bdctab.
      TABLES mapl.          "Assignment of Task Lists to Materials
      DATA: l_cnt_item(3)  TYPE n VALUE 1.    "Line item counter
      DATA: first(3)  TYPE n VALUE 16.    "Line item counter
      DATA: next(3)  TYPE n .    "Line item counter
      DATA: lin(3) TYPE n .    "Line item counter
      DATA: l_v_bmsch(13),   "Base qty
            l_v_meinh(3),    "Unit of Measure
            l_v_vgw01(11),   "Machine
            l_v_vgw02(11),   "Labour
            l_v_vge01(3).    "Unit of measure of activity
      DATA l_v_nextline TYPE sy-tabix.
      DATA wa_temp TYPE ty_record.
    *   Initialize Counter
          l_cnt_item = 1.
      SORT i_record BY matnr.
      LOOP AT i_record INTO wa_record.
    AT NEW matnr.
        REFRESH: i_bdcdata,
                 i_messtab.
        SET PARAMETER ID 'PLN' FIELD space.
        SET PARAMETER ID 'PAL' FIELD space.
        PERFORM f_bdc_dynpro      USING 'SAPLCPDI' '1010'.
        PERFORM f_bdc_field       USING 'BDC_OKCODE'
                                        '/00'.
    *   Material Number
        PERFORM f_bdc_field       USING 'RC27M-MATNR'
                                        wa_record-matnr.
    *   Plant
        PERFORM f_bdc_field       USING 'RC27M-WERKS'
                                        c_werks.
        PERFORM f_bdc_field       USING 'RC271-PLNNR'
                                        space.   " group left blank (assumed internal assignment)
    *   Check if routing already exists for the material
        SELECT * FROM mapl
                      INTO mapl
                                WHERE matnr EQ wa_record-matnr
                                  AND werks EQ c_werks
                                  AND plnty EQ 'N'.
          IF sy-subrc EQ 0.
            PERFORM f_bdc_dynpro      USING 'SAPLCPDI' '1200'.
            PERFORM f_bdc_field       USING 'BDC_OKCODE'
                                            '=ANLG  '.
          ENDIF.
        ENDSELECT.
        perform f_bdc_dynpro      USING 'SAPLCPDA' '1200'.
        perform f_bdc_field       USING 'BDC_OKCODE'
                                  '=VOUE'.
    *   Group Counter
        PERFORM f_bdc_field       USING 'PLKOD-PLNAL'
                                        space.   " group counter left blank (assumed)
    *   Usage
        PERFORM f_bdc_field       USING 'PLKOD-VERWE'
                                        '1'.
    *   Status
        PERFORM f_bdc_field       USING 'PLKOD-STATU'
                                        '4'.
    ENDAT.
        PERFORM f_bdc_dynpro      USING 'SAPLCPDI' '1400'.
    *   Check if page is full
        IF l_cnt_item EQ '16'.
    *     Page down
          PERFORM f_bdc_field       USING 'BDC_OKCODE'
                                               '=P+'.
          l_cnt_item = 1.
        ELSE.
          PERFORM f_bdc_field       USING 'BDC_OKCODE'
                                          '/00'.
        ENDIF.
       CLEAR v_fnam.
    *   Populate item level details
    *   Work Center
        CONCATENATE 'PLPOD-ARBPL(' l_cnt_item ')' INTO v_fnam.
        PERFORM f_bdc_field       USING v_fnam
                                        wa_record-arbpl.
    *   Control Key
        CONCATENATE 'PLPOD-STEUS(' l_cnt_item ')' INTO v_fnam.
        PERFORM f_bdc_field       USING v_fnam
                                        c_steus.
    *   Description of Operation
        CONCATENATE 'PLPOD-LTXA1(' l_cnt_item ')' INTO v_fnam.
        PERFORM f_bdc_field       USING v_fnam
                                        wa_record-ltxa1.
    *   Base Quantity
        CONCATENATE 'PLPOD-BMSCH(' l_cnt_item ')' INTO v_fnam.
        PERFORM f_bdc_field       USING v_fnam
                                        wa_record-bmsch.
    *   Unit of Measure
        CONCATENATE 'PLPOD-MEINH(' l_cnt_item ')' INTO v_fnam.
        PERFORM f_bdc_field       USING v_fnam
                                        wa_record-meinh.
    *   Machine
        CONCATENATE 'PLPOD-VGW01(' l_cnt_item ')' INTO v_fnam.
        PERFORM f_bdc_field       USING v_fnam
                                        wa_record-vgw01.
    *   Labour
       CONCATENATE 'PLPOD-VGW02(' l_cnt_item ')' INTO v_fnam.
       PERFORM f_bdc_field       USING v_fnam
                                       wa_record-vgw02.
    *   Unit of measure of activity
        CONCATENATE 'PLPOD-VGE01(' l_cnt_item ')' INTO v_fnam.
        PERFORM f_bdc_field       USING v_fnam
                                        wa_record-vge01.
          l_cnt_item = l_cnt_item + 1.
       CLEAR wa_record.
    AT END OF matnr.
         PERFORM f_bdc_field       USING 'BDC_OKCODE'
                                  '/00'.
          PERFORM f_bdc_field         USING 'BDC_OKCODE'
                                  '=BU'.
         wa_opt-DISMODE = 'A'.
         wa_opt-DEFSIZE = 'X'.
         wa_opt-UPDMODE = 'S'.
        PERFORM f_bdc_transaction USING 'CA01'.
    *   Initialize Counter
         l_cnt_item = 1.
    ENDAT.
      ENDLOOP.
    ENDFORM.                    " f_fill_bdctab
    *&---------------------------------------------------------------------*
    *&  Include           ZROUT_INCLUDE_F_CA01                             *
    *&      Form  f_read_file
    *       Form to read the file from presentation server
    FORM f_read_file .
    * To get the file name
      DATA l_v_file TYPE string.
    l_v_file = P_FILE.
    CALL FUNCTION 'GUI_UPLOAD'
          EXPORTING
            filename                = l_v_file
            filetype                = 'ASC'
            has_field_separator     = 'X'
          TABLES
            data_tab                = i_record
          EXCEPTIONS
            file_open_error         = 1
            file_read_error         = 2
            no_batch                = 3
            gui_refuse_filetransfer = 4
            invalid_type            = 5
            no_authority            = 6
            unknown_error           = 7
            bad_data_format         = 8
            header_not_allowed      = 9
            separator_not_allowed   = 10
            header_too_long         = 11
            unknown_dp_error        = 12
            access_denied           = 13
            dp_out_of_memory        = 14
            disk_full               = 15
            dp_timeout              = 16
            OTHERS                  = 17.
        IF sy-subrc <> 0.
          MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                  WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
        ENDIF.
    ENDFORM.                    " f_read_file
    *&      Form  f_bdc_dynpro
    *       Form to populate BDC Tab for new screen
    *      -->fp_program   Screen program name
    *      -->fp_dynpro    Screen Number
    *       Start new screen
    FORM f_bdc_dynpro USING fp_program fp_dynpro.
      CLEAR wa_bdcdata.
      wa_bdcdata-program  = fp_program.
      wa_bdcdata-dynpro   = fp_dynpro.
      wa_bdcdata-dynbegin = 'X'.
      APPEND wa_bdcdata TO i_bdcdata.
    ENDFORM.                    "f_bdc_dynpro
    *&      Form  f_bdc_field
    *       Insert field
    FORM f_bdc_field USING fp_fnam fp_fval.
      IF NOT fp_fval IS INITIAL.
        CLEAR wa_bdcdata.
        wa_bdcdata-fnam = fp_fnam.
        wa_bdcdata-fval = fp_fval.
        APPEND wa_bdcdata TO i_bdcdata.
      ENDIF.
    ENDFORM.                    "f_bdc_field
    *&      Form  f_bdc_transaction
    *       Call transaction and error handling
    *      -->fp_tcode   Transaction code
    FORM f_bdc_transaction  USING fp_tcode.
      DATA: l_mstring(480),
            l_color         TYPE i,
            l_mode          TYPE c.
      REFRESH i_messtab.
    CALL TRANSACTION fp_tcode USING i_bdcdata
                       OPTIONS FROM wa_opt
                       MESSAGES INTO i_messtab.
    * Messages during upload
      LOOP AT i_messtab INTO wa_messtab.
        CASE wa_messtab-msgtyp.
          WHEN 'S'.
            l_color = 5.
          WHEN 'E'.
            l_color = 6.
          WHEN 'W'.
            l_color = 3.
        ENDCASE.
        FORMAT COLOR = l_color.
        SELECT SINGLE * FROM t100 WHERE sprsl = wa_messtab-msgspra
                                  AND   arbgb = wa_messtab-msgid
                                  AND   msgnr = wa_messtab-msgnr.
        IF sy-subrc = 0.
          l_mstring = t100-text.
          IF l_mstring CS '&1'.
            REPLACE '&1' WITH wa_messtab-msgv1 INTO l_mstring.
            REPLACE '&2' WITH wa_messtab-msgv2 INTO l_mstring.
            REPLACE '&3' WITH wa_messtab-msgv3 INTO l_mstring.
            REPLACE '&4' WITH wa_messtab-msgv4 INTO l_mstring.
          ELSE.
            REPLACE '&' WITH wa_messtab-msgv1 INTO l_mstring.
            REPLACE '&' WITH wa_messtab-msgv2 INTO l_mstring.
            REPLACE '&' WITH wa_messtab-msgv3 INTO l_mstring.
            REPLACE '&' WITH wa_messtab-msgv4 INTO l_mstring.
          ENDIF.
          CONDENSE l_mstring.
          WRITE: / wa_messtab-msgtyp, l_mstring(250).
        ELSE.
          WRITE: / wa_messtab.
        ENDIF.
        FORMAT COLOR OFF.
      ENDLOOP.
      SKIP.
    ENDFORM.                    " f_bdc_transaction
    FORM F1001_GET_F4.
      CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
           EXPORTING
                PROGRAM_NAME  = SY-REPID
                DYNPRO_NUMBER = SY-DYNNR
             FIELD_NAME    = 'P_FILE'
           CHANGING
                FILE_NAME     = P_FILE
           EXCEPTIONS
                MASK_TOO_LONG = 1
                OTHERS        = 2.
      IF SY-SUBRC <> 0.
    *   File is not selected
       MESSAGE I000 WITH TEXT-M01.
      ENDIF.
    ENDFORM.                    " F1001_GET_F4

    Sudipta,
    I would request you to post this to the ABAP forum for a faster response.
    I had this problem too, and the ABAP developer corrected it; it was mostly a screen resolution difference between the system where the recording was made and the system used for the upload. Please try running the upload on the same system that was used for the recording.
    Regards,
    Prasobh

  • Error in ME59N- Internal number assignment not defined

    Dear Gurus,
    I am getting errors while creating an automatic PO in ME59N. The following errors are shown:
    Error Message                                                   Message Class   Msg No.
    PO header data still faulty                                     MEPO            2
    Internal number assignment not defined (please enter number)    06              243
    I have done all the settings in the material master and vendor master for automatic PO generation.
    I have also maintained the source list and info records for two vendors.
    I assigned the source of supply in the PR.
    After this I execute ME59N, entering purchasing group, purchasing organization, vendor and plant,
    and selecting the "Per Requisition" and "Per Contract" options.
    The two errors above are then shown in SAP.
    Kindly give us a solution to resolve this.
    Regards.
    Abhijit
    Subject changed to reflect the issue - by: Jürgen L

    Hi
    The error is probably caused by T161-NUMKI being blank. Please fill in a valid T161-NUMKI.
    This needs to be done in Customizing:
    Transaction SPRO:
    Material Management -> Purchasing -> Purchase Order -> Define Document types
    Number range in the case of internal number assignment
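    If you want to double-check the setting from a quick program before changing Customizing, here is a small sketch that reads the internal number range (NUMKI) for a purchasing document type from table T161; document type 'NB' is only an example.
    DATA ls_t161 TYPE t161.
    SELECT SINGLE * FROM t161
      INTO ls_t161
      WHERE bstyp = 'F'      " purchasing document category: purchase order
        AND bsart = 'NB'.    " example document type
    IF sy-subrc = 0 AND ls_t161-numki IS INITIAL.
      WRITE / 'No internal number range (NUMKI) maintained for this document type.'.
    ENDIF.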
    Hope this information is of help.
    Kind regards,
    Lorraine

  • Another headache - "problem during multiplexing/burning - buffer underun"

    Hello,
    After many iDVD issues, I finally got to the burning process of my DVD project. Halfway through, it spat my DVD out and gave me these errors:
    "Errors were found during the burning process:
    The recording device reported the illegal request: Buffer underrun. (0x21, 0x02.)
    There was a problem during multiplexing/burning."
    What is this supposed to mean and how can I just burn my DVD?!
    Thank you
    Chris

    It's been a while since I had this error crop up, but I know 2 things that caused it for me.
    1) Bad DVD media. It happened a lot when I was using some cheap, off-brand DVD-Rs. More expensive Sony DVDs (white) are much better and more consistent. The time wasted on bad burns from bad DVD media is not worth the savings.
    2) It could even be a problem with your DVD burner going bad, but that is less likely. I did have one SuperDrive replaced because I started having problems even with good, recommended brands of DVD media. However, try getting some better media first. Don't buy a lot; try a small package of DVD-Rs, or buy a disc from a friend to try out a brand.
    Another possibility: Lack of sufficient, free hard drive space. Here's what iDVD 6's help file states:
    "When you burn a DVD, iDVD needs temporary space on your hard disk to store the encoded files for your project. To burn your iDVD project to a single-layer DVD disc, you should have 8-10 gigabytes (GB) of free disk space on your hard disk, depending on the size of your project. For double-layer discs, you need 15-20 GB of free disk space."
    Apple's support forum describes "buffer underrun" in more detail at:
    http://docs.info.apple.com/article.html?artnum=25750
    You might also want to defragment your hard drive (with a 3rd-party utility such as TechTool Pro) so that you don't have a fragmented hard drive when burning a project.
    Speed rating of the DVD media: I've seen reports that some SuperDrives work best with media rated the same as their maximum burn speed (e.g. 4x, 8x, etc.). I would recommend trying to find media that matches your SuperDrive's burn speed, or no more than twice that speed.
    I hope this helps.
    You didn't mention which version of iDVD you were using or which operating system. Sometimes other users have tips for you based on that info.

  • Error message "ABAP/4 error during dynamic assign" on Contact Log creation.

    Hi,
    We are getting the error message "ABAP/4 error during dynamic assign beyond program bounds" while creating a contact log using transaction BCT0.
    Please suggest what could be the cause of this error message and how we can remove it.
    Regards,
    Ankur


  • ABAP/4 error during dynamic assign beyond program bounds

    Hi experts,
    after an upgrade from 4.7 to ECC 6.0, launching transaction AC03 produces an error (green light): "ABAP/4 error during dynamic assign beyond program bounds".
    Has anyone found a solution?
    regards
    andrea brescia

    Dear Andrea,
    The problem is probably caused by an obsolete entry in table TFAWX.
    Please use the following report to delete this entry (test it in your TEST system first):
    report ztfawx_delete.
    tables tfawx.
    select single * from tfawx where prog = 'SAPLBAS0'
                                and bldgr = '0110'
                                and mnum  = 23
                                and bfeld = '$GENERAL'.
    if sy-subrc = 0.
      delete tfawx.
    endif.
    Buona Giornata
    Mauro

  • KEB2: "number of transfered records number of read records" why?

    Hi
    I have an extractor 1_CO_PA_1000. When I go to R/3 and look in KEB2, I can see that fewer records are transferred than read for 1_CO_PA_1000. This is not the case for every request - sometimes every record read is transferred.
    Does anybody have an idea how that can happen?
    Kind regards,
    Torben

    Key Figure Model versus Account Model in CB COPA
    Key Figure model:
    Pros:
    •     The number of records is smaller, as a result of having more key figures.
    •     Direct access to key figures at run time.
    •     Better data read performance, since there are fewer records; read time depends more on the number of records than on their size. Fewer records = shorter read time.
    Cons:
    •     Lack of flexibility when it comes to redesign; an additional key figure may need to be added.
    •     Each record is longer as a result of having more key figures, and empty cells are generated if not all key figures are used.
    Account model:
    Pros:
    •     Changes to the data model can be made more easily; adding more key figures is not required (please refer to the example).
    •     Each record is shorter.
    •     If a key figure is not being used, no additional record is added.
    Cons:
    •     The number of records is very high.
    •     Data read performance suffers as a result of having more records.
    Example:
    If we want to assign multiple key figures (say, different types of revenue) to a characteristic (say, customer), we can use either the key figure model or the account model.
    Key figure model:
    Customer   Revenue1   Revenue2   Revenue3
    A          10         20         50
    B          10                    40
    Account model:
    Customer   Revenue type   Revenue
    A          1              10
    A          2              20
    A          3              50
    B          1              10
    B          3              40
    As we can see from the example above, the key figure model has fewer records than the account model. We can easily add an additional revenue type in the account model, because we do not have to change the data structure. In the key figure model, on the other hand, we must add an additional key figure to the structure.
    It is also apparent that the number of additional records in the account model depends on the number of key figures that are actually being used. If there are key figures that are not being used, then empty cells are generated in the key figure model, but no additional records are added in the account model.
    The amount of memory required to store the data depends on both the number and the length of the records, and both depend on the model chosen. Whether the key figure model or the account model requires less memory depends on the relationship between record length and number of records. The key figure model is better when all or most of the key figures are filled in each record; the account-based model is better if we expect key figures that will not be filled.
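    As a rough illustration only (field and type names are invented for this example and are not the actual CO-PA structures), the two models correspond to record layouts like these:
    * Key figure model: one record per customer, one column per revenue type
    TYPES: BEGIN OF ty_key_figure_model,
             customer TYPE c LENGTH 10,
             revenue1 TYPE p LENGTH 8 DECIMALS 2,
             revenue2 TYPE p LENGTH 8 DECIMALS 2,
             revenue3 TYPE p LENGTH 8 DECIMALS 2,
           END OF ty_key_figure_model.
    * Account model: one record per customer and revenue type
    TYPES: BEGIN OF ty_account_model,
             customer     TYPE c LENGTH 10,
             revenue_type TYPE n LENGTH 2,
             revenue      TYPE p LENGTH 8 DECIMALS 2,
           END OF ty_account_model.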
    For CB COPA, the two concerns below started the conversation about exploring the key figure model:
    •     Expected additional duration during ETL (for transposing key figure model records into account model records).
    •     Better expected performance with the key figure model.
    But the reasons below strongly drive us towards the account model for CB COPA:
    •     The key figure model does not offer the query flexibility that the account model offers. With the account model, a specific line in the P&L can be freely selected and drilled down on (flexible column reports, for example).
    •     The account model tends to be easier to explain to the user community. When somebody drills down on an account in the P&L, the account model shows the amounts against transparent accounts. With the key figure model, an account drill-down is not straightforward (the COGS value field in CB COPA will not give the corresponding COGS account number during an account drill-down).
    •     All other data models (CCA, Oracle data, Freight, Layers, and Adjustment) will be in the account model, and putting just CB COPA into the key figure model would add complexity to query creation and maintenance.
    •     It has been tried and tested with satisfactory results with AB-COPA. If it ain’t broke, don’t fix it!!

  • Very very strange backflush problem during CO11N?????

    Hi Experts,
    I have found a strange backflush problem during production order confirmation with CO11N, described as follows:
    I create a production order for material A with a total order quantity of 2, i.e. to produce 2 of material A. To produce 2 of material A we need 10 of material B, so in the component view of this production order there is an item for material B with a requirement quantity of 10.
    After releasing the production order, I confirm it with CO11N and enter 1 for material A in "Yield to Be Confirmed", i.e. a partial confirmation of 1 material A (the order quantity for A is 2). Because 2 of material A need 10 of material B, the system posts a goods movement for 5 of material B.
    After confirming 1 material A, I change the production order: I change the requirement quantity of material B from 10 to 5. The quantity withdrawn of B is also 5, because of the backflush from confirming 1 material A.
    After changing the production order, I again confirm with CO11N and enter 1 for material A in "Yield to Be Confirmed", i.e. I confirm the last material A to be produced. But because I have already changed the BOM of the production order, 2 of material A now need only 5 of material B, and the quantity withdrawn of B is already 5. So I think the system should not backflush material B for this confirmation.
    However, the system still backflushes material B for this confirmation, and the backflush quantity is 3, not 5; that is, the system automatically posts a goods movement for 3 of material B.
    Very strange; I don't understand why.
    I don't know why the system still posts an automatic backflush goods movement, even though I have changed the requirement quantity of the component and the quantity withdrawn is already equal to the requirement quantity.
    What is the automatic goods movement logic in production order confirmation, especially the backflush quantity logic?
    I also want to know how I can control the backflushed component quantity during order confirmation with CO11N in Customizing through the IMG.
    Where can I customize the backflush quantity in the IMG?

    duffymo's right (but a little terse in explaining).
    Your "urgent problem" is that your SQL query is unorderd. However duffymo is also pointing out that a little additionaly work can make your code be more standard and work better/faster.
    Instead of doing what you're doing (bouncing to the end of the result set end, getting the number of messages, then walking through the result set backwards), retrieve the data in the order you want to display, using an "ORDER BY x" or "ORDER BY x DESC" clause. Use rs.next() to walk through the records and count them as you do so.
    no_of_mail = 0;
    while (rs.next()) {
        no_of_mail = no_of_mail + 1;
        // process the current row here
    }
    Many drivers and DBs are designed to be much, much more efficient when ResultSets are accessed "in order"; since that's the normal case, that's what is optimized the most.

  • PA-Hiring Error- No Valid Interval Found (Internal Number Assignment)

    Hello,
    I am facing this issue at the time of hiring (IT0000). The internal number range (01) is maintained in transaction PA04 from
    00000001 to 90000000, and the "ext" checkbox is NOT checked. In feature NUMKR, the return value is also 01.
    Now the weird thing is that I get the error "No Valid Interval Found (Internal Number Assignment)", but if I ignore the error and hit Enter/Save, it lets me save IT0000 and moves on to the next infotype in the infogroup.
    Has anyone faced this before? What could be the possible solution here? Any help would be greatly appreciated.
    thank you in advance
    Reema

    Hey Reema,
    http://www.sapfans.com/forums/viewtopic.php?f=11&t=327031
    NUMKR-Feature
    I think these are similar threads and have the same problem as you. Hope Partha Murli picks this up or you might just mail him.
    cheers
    Ajay

  • Mapping problem with compressed key update record

    Hi, could you please advise?
    I'm getting the following problem:
    About a week ago the replicat abended with an "Error in mapping" error. I found in the discard file a record looking like this:
    filed1 = NULL
    field2 =
    field3 =
    field4 =
    field5 =
    datefield = -04-09 00:00:00
    field6 =
    field8 =
    field9 = NULL
    field10 =
    Where filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), field10 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), and the others are table fields mapped by USEDEFAULTS.
    So I got "Mapping problem with compressed key update record" at 2012-06-01 15:44.
    I should mention that the extract had failed 5 minutes before that with: VAM function VAMRead returned unexpected result: error 600 - VAM Client Report <[CFileInfo::Read] Timeout expired after 10 retries with 1000 ms delay, waiting to read transaction log or backup files. To increase the number of retries, use SETENV (GGS_CacheRetryCount = n) in Extract parameter file. To control retry delay time, use SETENV (GGS_CacheRetryDelay = n). handle: 0000000000000398 ReadFile GetLastError:997 Wait GetLastError:997>.
    I don't know whether it has the same root cause as the data corruption; could you tell me if it does?
    Well, I created a new extract starting at 2012-06-01 15:30 to check whether something was wrong with the extract at that time, but I got the same error.
    If I run the extract beginning at 15:52, it starts and works.
    But I got another one today. The data didn't look that bad, yet one column came through with a null value:( And I'm using it as a key column, so I got "Mapping problem with compressed key update record" again:(
    I'm replicating from SQL Server 2008 to Oracle 11g.
    I'm actually using NOCOMPRESSUPDATES in the Extract.
    CDC is enabled for all replicated tables. The only thing is that it was enabled not by the ADD TRANDATA command but by SQL Server sys.sp_cdc_enable_table; does that matter?
    Could you please advise why this happens?

    Well, the problem begins somewhere in the extract or before the extract, maybe in the transaction log, I don't know:(
    Here are extract parameters:
    EXTRACT ETCHECK
    TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT
    SOURCEDB TEST, USERID **, PASSWORD *****
    exttrail ./dirdat/ec
    NOCOMPRESSUPDATES
    NOCOMPRESSDELETES
    TABLE tst.table1, COLS (field1, field2, field3, field4, field5, field6, field7, field8 );
    TABLE tst.table2, COLS (field1, field2, field3, field4 );
    Data pump:
    EXTRACT DTCHECK
    SOURCEDB TEST, USERID **, PASSWORD *****
    RMTHOST ***, MGRPORT 7809
    RMTTRAIL ./dirdat/dc
    TABLE tst.table1;
    TABLE tst.table2;
    Replicat:
    REPLICAT rtcheck
    USERID tst, PASSWORD ***
    DISCARDFILE ./dirrpt/rtcheck.txt, PURGE
    SOURCEDEFS ./dirdef/sourcei.def
    HANDLECOLLISIONS
    UPDATEDELETES
    MAP tst.table1, t.table1, COLMAP (USEDEFAULTS , filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed10= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (field3);
    MAP dbo.TPROCPERIODCONFIRMSTAV, TARGET R_019_000001.TPROCPERIODCONFIRMSTAV, COLMAP (USEDEFAULTS , field5 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed6= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (filed1, field2, field3);
    Rpt file for replicat:
    Oracle GoldenGate Delivery for Oracle
    Version 11.1.1.1 OGGCORE_11.1.1_PLATFORMS_110421.2040
    Windows x64 (optimized), Oracle 11g on Apr 22 2011 00:34:07
    Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.
    Starting at 2012-06-05 12:49:38
    Operating System Version:
    Microsoft Windows Server 2008 R2 , on x64
    Version 6.1 (Build 7601: Service Pack 1)
    Process id: 2264
    Description:
    ** Running with the following parameters **
    REPLICAT rtcheck
    USERID tst, PASSWORD ***
    DISCARDFILE ./dirrpt/rtcheck.txt, PURGE
    SOURCEDEFS ./dirdef/sourcei.def
    HANDLECOLLISIONS
    UPDATEDELETES
    MAP tst.table1, t.table1, COLMAP (USEDEFAULTS , filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed10= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (field3);
    MAP dbo.TPROCPERIODCONFIRMSTAV, TARGET R_019_000001.TPROCPERIODCONFIRMSTAV, COLMAP (USEDEFAULTS , field5 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed6= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (filed1, field2, field3);
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 512M
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1G
    CACHESIZEMAX (strict force to disk): 881M
    Database Version:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE     11.2.0.1.0     Production
    TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    Database Language and Character Set:
    NLS_LANG = "AMERICAN_AMERICA.CL8MSWIN1251"
    NLS_LANGUAGE = "AMERICAN"
    NLS_TERRITORY = "AMERICA"
    NLS_CHARACTERSET = "CL8MSWIN1251"
    For further information on character set settings, please refer to user manual.
    ** Run Time Messages **
    Opened trail file ./dirdat/dc000000 at 2012-06-05 12:49:39
    2012-06-05 12:58:14 INFO OGG-01020 Processed extract process RESTART_ABEND record at seq 0, rba 925 (aborted 0 records).
    MAP resolved (entry tst.table1):
    MAP tst.table1, t.table1, COLMAP (USEDEFAULTS , filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed10= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (field3);
    2012-06-05 12:58:14 WARNING OGG-00869 No unique key is defined for table table1. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.
    Using the following default columns with matching names:
    field1=field1, field2=field2, field3=field3, field4=field4, field5=field5, field6=field6, field7=field7, field8=field8
    Using the following key columns for target table R_019_000001.TCALCULATE: field3.
    2012-06-05 12:58:14 WARNING OGG-01431 Aborted grouped transaction on 'tst.table1', Mapping error.
    2012-06-05 12:58:14 WARNING OGG-01003 Repositioning to rba 987 in seqno 0.
    2012-06-05 12:58:14 WARNING OGG-01151 Error mapping from tst.table1 to tst.table1.
    2012-06-05 12:58:14 WARNING OGG-01003 Repositioning to rba 987 in seqno 0.
    Source Context :
    SourceModule : [er.main]
    SourceID : [er/rep.c]
    SourceFunction : [take_rep_err_action]
    SourceLine : [16064]
    ThreadBacktrace : [8] elements
    : [C:\App\OGG\replicat.exe(ERCALLBACK+0x143034) [0x00000001402192B4]]
    : [C:\App\OGG\replicat.exe(ERCALLBACK+0x11dd44) [0x00000001401F3FC4]]
    : [C:\App\OGG\replicat.exe(<RCALLBACK+0x11dd44) [0x000000014009F102]]
    : [C:\App\OGG\replicat.exe(<RCALLBACK+0x11dd44) [0x00000001400B29CC]]
    : [C:\App\OGG\replicat.exe(<RCALLBACK+0x11dd44) [0x00000001400B8887]]
    : [C:\App\OGG\replicat.exe(releaseCProcessManagerInstance+0x25250) [0x000000014028F200]]
    : [C:\Windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007720652D]]
    : [C:\Windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007733C521]]
    2012-06-05 12:58:14 ERROR OGG-01296 Error mapping from tst.table1 to tst.table1.
    * ** Run Time Statistics ** *
    Last record for the last committed transaction is the following:
    Trail name : ./dirdat/dc000000
    Hdr-Ind : E (x45) Partition : . (x04)
    UndoFlag : . (x00) BeforeAfter: A (x41)
    RecLength : 249 (x00f9) IO Time : 2012-06-01 15:48:56.285333
    IOType : 115 (x73) OrigNode : 255 (xff)
    TransInd : . (x03) FormatType : R (x52)
    SyskeyLen : 0 (x00) Incomplete : . (x00)
    AuditRBA : 44 AuditPos : 71176199289771
    Continued : N (x00) RecCount : 1 (x01)
    2012-06-01 15:48:56.285333 GGSKeyFieldComp Len 249 RBA 987
    Name: DBO.TCALCULATE
    Reading ./dirdat/dc000000, current RBA 987, 0 records
    Report at 2012-06-05 12:58:14 (activity since 2012-06-05 12:58:14)
    From Table tst.table1 to tst.table1:
    # inserts: 0
    # updates: 0
    # deletes: 0
    # discards: 1
    Last log location read:
    FILE: ./dirdat/dc000000
    SEQNO: 0
    RBA: 987
    TIMESTAMP: 2012-06-01 15:48:56.285333
    EOF: NO
    READERR: 0
    2012-06-05 12:58:14 ERROR OGG-01668 PROCESS ABENDING.
    Discard file:
    Oracle GoldenGate Delivery for Oracle process started, group RTCHECK discard file opened: 2012-06-05 12:49:39
    Key column filed3 (0) is missing from update on table tst.table1
    Missing 1 key columns in update for table tst.table1.
    Current time: 2012-06-05 12:58:14
    Discarded record from action ABEND on error 0
    Aborting transaction on ./dirdat/dc beginning at seqno 0 rba 987
    error at seqno 0 rba 987
    Problem replicating tst.table1 to tst.table1
    Mapping problem with compressed key update record (target format)...
    filed1 = NULL
    field2 =
    field3 =
    field4 =
    field5 =
    datefield = -04-09 00:00:00
    field6 =
    field8 =
    field9 = NULL
    field10 =
    Process Abending : 2012-06-05 12:58:14

  • Dynamic type conflict during the assignment of references. - Error while generating proxy in the backend

    Hi All,
    I get a short dump while generating a proxy in the backend: I enter the package and the prefix and end up with the dump.
    Does anyone know why this might come up?
    "Dynamic type conflict during the assignment of references."
    Background: I imported a WSDL provided by the legacy system into PI and created service interfaces; when I then try to generate a proxy class, I get this error.
    Thanks.

    Hi Shyamsundar,
    I will explain a problem that I usually see in some developments:
    XSD originally:                XSD transformed:
    Root                           Root
    Tag1 type int                  Tag1 type int
    Tag2 type string               Tag2 type string
    Tag3 type any                  Tag3 type string
    Normally Tag3 would contain XML. The ABAPers then have to build Tag3 with a CDATA section (CDATA is used to put further XML tags inside an XML tag as plain text, so that they are not interpreted).
    Later, in SAP PI, you can extract the CDATA content with an XSL mapping; you can find some examples on SCN.
    I don't like converting the whole XML into a single string tag, because that makes development harder for the ABAPers, even though the work inside PI is very easy, since with an XSL you can extract the whole message easily. (You can find some examples on SCN.)
    Regards.
