Table BWFI_AEDAT

Hello Experts,
           Please explain the table BWFI_AEDAT to me.
           It is used in the FI extractors, but what is the purpose of this table there?
Thanks,
Niranjan.

The change date on the BKPF header can't be trusted, so SAP had to come up with another way of finding changed FI documents.
They did this by putting an event call (an Open FI/BTE event) into the FI document change function module.
This event records the change date, document number, fiscal year and company code of the changed record in a table (BWFI_AEDAT).
The extractor reads this table after picking up the new records, uses the from and to timestamps to retrieve the changed documents, and then reads BSEG, BSID/BSAD or BSIK/BSAK for the rest of the details (depending on the extractor).
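In effect the event does something like the following (a minimal sketch, not SAP's actual implementation; the BWFI_AEDAT fields used here are just the ones named above — company code, document number, year and change date):

    * Sketch: record the key of a changed FI document in BWFI_AEDAT.
    DATA ls_aedat TYPE bwfi_aedat.

    ls_aedat-bukrs = bkpf-bukrs.   " company code
    ls_aedat-belnr = bkpf-belnr.   " document number
    ls_aedat-gjahr = bkpf-gjahr.   " fiscal year
    ls_aedat-aedat = sy-datum.     " change date
    INSERT bwfi_aedat FROM ls_aedat.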

Similar Messages

  • Concept of the change document table BWFI_AEDAT ???

    What is the concept of the change document table BWFI_AEDAT, and what is the use of the 60-day (user-definable) parameter?
    Does this table hold the timestamps for deltas?
    Could anybody elaborate on this?
    I was going through some of the posts and came across this...

    Seems one of the 4 replaced positions is going to be filled after all
    Sorry for this aside, couldn't resist

  • Table BWFI_AEDAT empty?

    Hi Guru's,
    I am loading DataSource 0FI_AP_4 into our BI 7 system. However, I notice that even after running an init load and then a delta the next day, the BWFI_AEDAT table is empty.
    Perhaps I do not understand the purpose of entries in this table? I thought it was to store all rows already transferred via the delta load (for use in a repeat delta if required), as long as the corresponding date was still available in the BWOM2_TIMEST table for this DataSource, and also the failed rows requested on a prior delta request?
    If not, is there any configuration I need to do to enable entries to get written into this table? Our BI system is fairly new and perhaps not fully set up, so maybe we still need to do some configuration settings?...
    Thanks!
    Darryl

    Hi there,
    Both 0FI_AP_4 and 0FI_AR_4 need to be run only after running 0FI_GL_4.
    Did you run 0FI_GL_4 first?
    Check in here:
    [http://help.sap.com/saphelp_bw33/helpdata/en/3f/fbf83a0dfa5102e10000000a114084/content.htm]
    In particular it says this:
    Features of the Extractor
    As the strict connection of DataSources has been abolished as of PlugIn 2002.2, you can now load 0FI_GL_4, 0FI_AR_4, 0FI_AP_4 and 0FI_TX_4 in any order you like. You can also use DataSources 0FI_AR_4, 0FI_AP_4 and 0FI_TX_4 separately without 0FI_GL_4. The DataSources can then be used independently of one another (see note 551044).
    Note
    As soon as DataSource 0FI_GL_4 has been loaded, it is the leading DataSource with regard to delimitation, such as the time and date (CPU date) up to which extraction should be carried out.
    Therefore, when moving from delta init mode to delta operation, the delta request for 0FI_GL_4 should take place first.
    Connecting the DataSources in delta operation helps to ensure consistent extraction of FI data. Consistency must be ensured using the init selection criteria.
    Diogo.
    Edited by: Diogo Ferreira on May 27, 2009 4:05 PM

  • 0FI_AR_4 & SPECIFICALLY BWFI_AEDAT

    Hi All,
    I have read so much about 0FI_AR_4, but one doubt still remains.
    Some information:
    1. The 0FI_AR_4 extractor in my project is not minute-based yet.
    2. The extraction takes place after 2:00 GMT. Loading is always once a day.
    Confusion:
    I have understood that when I run the load, say at 3:00 GMT (6.12.2007), the function module of DataSource 0FI_AR_4 gets data from the table BWFI_AEDAT and sends it to BW.
    But when I check the table BWFI_AEDAT with the selection (after the load is completed):
    changed on: 5.12.2007...
    I get the number of records = 80000,
    while the number of records extracted into the corresponding ODS = 99000.
    Can anyone explain why I get this difference?
    I want to thank Chetan Patel, as I am following his answers regularly and that has clarified a lot of doubts.

    Hi Guys,
    Can anyone please clarify, or at least tell me if my understanding is wrong?
    Regards
    Kishore

  • Time stamp table in fi?

    Please send detailed information; I could not find it in the forum.
    1. What are the different timestamp tables used in FI extraction?
    2. What are the fields in those tables and their descriptions? I mean: which table do we need in which scenario, which fields of that table do we work with, and what do those fields mean?

    You can find the global settings for the FI DataSource timestamps in the BWOM_SETTINGS and BWOM2_TIMEST tables on the ECC system side.
    All changed FI documents (FI document key, date of last change) are recorded in table BWFI_AEDAT.
    Check for the below notes
    Note 860500 - BW-BCT-FI: Repeating delta extractions
    Note 991429 - Minute Based extraction enhancement for 0FI_*_4 extractors
    Time stamp mechanism is clearly explained in the below link
    Line Item Level Data Extraction for Financial Accounting and Controlling
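
    For a quick look at this bookkeeping on the ECC side you can query the tables directly, in SE16 or with a small report like the sketch below (only OLTPSOURCE and LAST_TS are confirmed in this thread; treat the rest of the structures as assumptions):

        * Sketch: inspect the FI delta bookkeeping for one DataSource.
        DATA: lt_settings TYPE STANDARD TABLE OF bwom_settings,
              lt_timest   TYPE STANDARD TABLE OF bwom2_timest.

        SELECT * FROM bwom_settings INTO TABLE lt_settings.   " global parameters
        SELECT * FROM bwom2_timest INTO TABLE lt_timest       " per-DataSource timestamps
          WHERE oltpsource = '0FI_AR_4'.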

  • Keeping no of days less than 15 in field DELTIMESTof BWOM_SETTINGS table

    Hello Experts,
    Is it possible to keep the number of days in field DELTIMEST of the BWOM_SETTINGS table below 15, so as to keep data in BWFI_AEDAT for only 2 days?
    I tried, but it is not working; it still keeps the data of the last 15 days.
    Regards,
    Ankush Sharma

    Hi Ankush,
    DELTIMEST is the minimum retention period of time stamp tables.
    As per SAP logic, the minimum retention period for entries in the timestamp table is 15 days. So if you decrease the parameter value below 15, SAP automatically takes the minimum value of 15 and ignores your setting. That is why you are not able to delete entries that are less than 15 days old.
    Ideally, when entries are deleted from table BWOM2_TIMEST, the corresponding entries for the changed line items are deleted simultaneously from the log tables BWFI_AEDAT, BWFI_AEDA2 and BWFI_AEDA3.
    The BWFI_AEDAT table contains the list of changed documents with the change date and the document number, which drives the delta extraction. SAP recommends a DELTIMEST value >= 60.
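    The effective retention therefore behaves like this sketch (a hypothetical helper, not SAP code; the parameter is read from BWOM_SETTINGS as described in the note below):

        * Sketch: DELTIMEST values below the SAP minimum of 15 are ignored.
        DATA lv_deltimest TYPE i.

        SELECT SINGLE param_value FROM bwom_settings INTO lv_deltimest
          WHERE param_name = 'DELTIMEST'.
        IF lv_deltimest < 15.
          lv_deltimest = 15.   " SAP enforces this minimum retention period
        ENDIF.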
    Refer the below SAP note: (check BWOM_SETTINGS part in solution)
    1012874 - FAQ extractors 0FI_AP_* 0FI_AR_* 0FI_GL_*(except 10)
    Let me know if there are any questions.
    Thanks
    Amit

  • Cash Journal

    Dear Guru,
                  I am working with ECC 6.0. Posting through transaction FBCJ updates the register, but while posting to accounting I am getting the following error:
    Runtime Errors         DBIF_RSQL_INVALID_REQUEST
    Date and Time          20.06.2007 11:28:46
    Short text
    Invalid request.
    What happened?
    The current ABAP/4 program terminated due to
    an internal error in the database interface.
    What can you do?
    Note which actions and input led to the error.
    For further help in handling the problem, contact your SAP administrator
    You can use the ABAP dump analysis transaction ST22 to view and manage
    termination messages, in particular for long term reference.
    Error analysis
    An invalid request was made to the SAP database interface in a statement
    in which the table "BSEG " was accessed.
    The situation points to an internal error in the SAP software
    or to an incorrect status of the respective work process.
    For further analysis the SAP system log should be examined
    (transaction SM21).
    For a precise analysis of the error, you should supply
    documents with as many details as possible.
    How to correct the error
    Start the work process involved and repeat the action that led to the
    error.
    If the error position is in a program that you can change, you can try
    to create a preliminary solution:  Reformulate the database command by
    varying the properties such as individual record access instead of
    input/output via internal tables, the structure of the selection
    conditions (WHERE clause), nested SELECT loops instead of FOR ALL
    ENTRIES and other such variations.
    Please check the entries in the system log (Transaction SM21).
    Check the entries in the developer trace of the work process involved
    (transaction ST11).
    If the error occurs in a non-modified SAP program, you may be able to
    find an interim solution in an SAP Note.
    If you have access to SAP Notes, carry out a search with the following
    keywords:
    "DBIF_RSQL_INVALID_REQUEST" " "
    "SAPLF005" or "LF005S01"
    "BSEG_INSERT"
    If you cannot solve the problem yourself and want to send an error
    notification to SAP, include the following information:
    1. The description of the current problem (short dump)
    To save the description, choose "System->List->Save->Local File
    (Unconverted)".
    2. Corresponding system log
    Display the system log by calling transaction SM21.
    Restrict the time interval to 10 minutes before and five minutes
    after the short dump. Then choose "System->List->Save->Local File
    (Unconverted)".
    3. If the problem occurs in a program of your own or a modified SAP
    program: The source code of the program
    In the editor, choose "Utilities->More
    Utilities->Upload/Download->Download".
    4. Details about the conditions under which the error occurred or which
    actions and input led to the error.
    System environment
    SAP-Release 700
    Application server... "DEVLSL"
    Network address...... "124.124.124.2"
    Operating system..... "HP-UX"
    Release.............. "B.11.23"
    Hardware type........ "ia64"
    Character length.... 8 Bits
    Pointer length....... 64 Bits
    Work process number.. 4
    Shortdump setting.... "full"
    Database server... "DEVLSL"
    Database type..... "ORACLE"
    Database name..... "DEV"
    Database user ID.. "SAPR3"
    Char.set.... "en_US.iso88591"
    SAP kernel....... 700
    created (date)... "Apr 2 2006 21:28:22"
    create on........ "HP-UX B.11.23 U ia64"
    Database version. "OCI_102 (10.2.0.1.0) "
    Patch level. 52
    Patch text.. " "
    Database............. "ORACLE 9.2.0.., ORACLE 10.1.0.., ORACLE 10.2.0.."
    SAP database version. 700
    Operating system..... "HP-UX B.11"
    Memory consumption
    Roll.... 16128
    EM...... 12569712
    Heap.... 0
    Page.... 81920
    MM Used. 925888
    MM Free. 3261456
    User and Transaction
    Client.............. 123
    User................ "NBS"
    Language key........ "E"
    Transaction......... "FBCJ "
    Program............. "SAPLF005"
    Screen.............. "SAPMSSY0 1000"
    Screen line......... 6
    Information on where terminated
    Termination occurred in the ABAP program "SAPLF005" - in "BSEG_INSERT".
    The main program was "SAPMSSY4 ".
    In the source code you have the termination point in line 276
    of the (Include) program "LF005S01".
    Source Code Extract
    Line  SourceCde
      246 *eject
      247 *------- BSED_INSERT -------------------------------------------
      248 FORM BSED_INSERT.
      249   INSERT BSED.
      250   IF SY-SUBRC NE 0.
      251     MESSAGE A108 WITH BSED-BUZEI BSED-BUKRS BSED-BELNR BSED-GJAHR.
      252   ENDIF.
      253 ENDFORM.                    "BSED_INSERT
      254
      255 *------- BSED_UPDATE -------------------------------------------
      256 FORM BSED_UPDATE.
      257   UPDATE  BSED.
      258   IF SY-SUBRC NE 0.
      259     MESSAGE A111 WITH BSED-BUZEI BSED-BUKRS BSED-BELNR BSED-GJAHR.
      260   ENDIF.
      261 ENDFORM.                    "BSED_UPDATE
      262
      263 *------- BSED_READ ---------------------------------------------
      264 FORM BSED*_READ.
      265   SELECT SINGLE * INTO *BSED
      266     FROM BSED
      267     WHERE BUKRS = *BKPF-BUKRS
      268     AND   GJAHR = *BKPF-GJAHR
      269     AND   BELNR = *BKPF-BELNR
      270     AND   BUZEI = *BSEG-BUZEI.
      271 ENDFORM.                    "BSED*_READ
      272
      273 *eject
      274 *------- BSEG_INSERT -------------------------------------------
      275 FORM BSEG_INSERT.
    >>>>>   INSERT BSEG.
      277   IF SY-SUBRC NE 0.
      278     MESSAGE A109 WITH BSEG-BUZEI BSEG-BUKRS BSEG-BELNR BSEG-GJAHR.
      279   ENDIF.
      280 ENDFORM.                    "BSEG_INSERT
      281
      282 *------- BSEG_UPDATE -------------------------------------------
      283 FORM BSEG_UPDATE.
      284   UPDATE  BSEG.
      285   IF SY-SUBRC NE 0.
      286     MESSAGE A112 WITH BSEG-BUZEI BSEG-BUKRS BSEG-BELNR BSEG-GJAHR.
      287   ENDIF.
      288 *...XPU050601: start insert note 401646...............................
      289 *...write entry in table BWFI_AEDAT...................................
      290   IF SY-SUBRC = 0.
      291     CALL FUNCTION 'OPEN_FI_PERFORM_00005011_P'
      292       EXPORTING
      293         I_CHGTYPE   = 'U'
      294         I_ORIGIN    = 'LF005S01 BSEG_UPDATE'
      295         I_TABNAME   = 'BSEG'
    Contents of system fields
    Name      Val.
    SY-SUBRC  0
    SY-INDEX  0
    SY-TABIX  1
    SY-DBCNT  1
    SY-FDPOS  0
    SY-LSIND  0
    SY-PAGNO  0
    SY-LINNO  1
    SY-COLNO  1
    SY-PFKEY
    SY-UCOMM
    SY-TITLE  Update Control
    SY-MSGTY  E
    SY-MSGID  B2
    SY-MSGNO  001
    SY-MSGV1
    SY-MSGV2
    SY-MSGV3
    SY-MSGV4
    SY-MODNO  3
    SY-DATUM  20070620
    SY-UZEIT  112845
    SY-XPROG  RSDBRUNT
    SY-XFORM  %_INIT_PBO_FIRST
    Active Calls/Events
    No.  Ty.       Program    Include    Line  Name
      7  FORM      SAPLF005   LF005S01    276  BSEG_INSERT
      6  FORM      SAPLF005   LF005F01    123  BELEG_SCHREIBEN
      5  FUNCTION  SAPLF005   LF005U01    527  POST_DOCUMENT
      4  FORM      SAPLF005   LF005U01      1  POST_DOCUMENT
      3  FORM      SAPMSSY4   SAPMSSY4    111  %_UPDATES_NO_UTASK
      2  FORM      SAPMSSY4   SAPMSSY4     35  LOCAL_UPDATE_TASK
      1  EVENT     SAPMSSY4   SAPMSSY4     20  START-OF-SELECTION
    Chosen variables
    [The remainder of the pasted dump is the variable listing for the frames BSEG_INSERT, BELEG_SCHREIBEN and POST_DOCUMENT (work areas with hex content plus internal table headers). The key values it shows are BSEG-BUKRS = 1000, BSEG-BELNR = 0100000030, BSEG-GJAHR = 2007, BSEG-BUZEI = 001, and message SY-MSGID B2 / SY-MSGNO 001. The paste breaks off mid-listing.]

    Hello
    This is a runtime error which can happen for various reasons.
    Ideally, execute the transaction once again, and if the error repeats, immediately go to SU53 and take a screenshot.
    This screenshot shows the transaction code and some error details, which will assist the Basis team in analysing and fixing the issue.
    If it still persists, check with an ABAPer.
    Reg
    *Assign points if useful

  • Error while data loading

    Hi Gurus,
    I am getting an error while loading data. On the BI side, when I check the error, it says the background job was cancelled, and when I check on the R/3 side I get the following error:
    Job started
    Step 001 started (program SBIE0001, variant &0000000065503, user ID R3REMOTE)
    Asynchronous transmission of info IDoc 2 in task 0001 (0 parallel tasks)
    DATASOURCE = 2LIS_11_V_ITM
             Current Values for Selected Profile Parameters               *
    abap/heap_area_nondia......... 2000683008                              *
    abap/heap_area_total.......... 4000317440                              *
    abap/heaplimit................ 40894464                                *
    zcsa/installed_languages...... ED                                      *
    zcsa/system_language.......... E                                       *
    ztta/max_memreq_MB............ 2047                                    *
    ztta/roll_area................ 6500352                                 *
    ztta/roll_extension........... 3001024512                              *
    4 LUWs confirmed and 4 LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA
    ABAP/4 processor: DBIF_RSQL_SQL_ERROR
    Job cancelled
    Please help me out what should I do.
    Regards,
    Mayank

    Hi Mayank,
    The log says it went to a short dump due to a temp space issue. As it is a source system job, check the temp tablespace on the source system side, and also check on the BI side as well.
    Check with your Basis team regarding the temp PSA tablespace; if it is out of space, ask them to increase the tablespace and try to repeat the load.
    Check the below note
    Note 796422 - DBIF_RSQL_SQL_ERROR during deletion of table BWFI_AEDAT
    Regards
    KP
    Edited by: prashanthk on Jul 19, 2010 10:42 AM

  • 0FI_AR_4 not updating item status

    Hi !
    I am facing a problem with extractor 0FI_AR_4. When loading a delta with clearing documents, the status of the document cleared by this delta is still 'O'.
    I can find the clearing document in the ODS, but the cleared one is still the same, with a blank clearing document and clearing date.
    Is this correct? Shouldn't the cleared document appear as cleared, with its corresponding clearing document and clearing date?
    edit: I have just found OSS Note 1051099 - BUC: Change to clearing information not reported to BW, but I know for certain that the clearing process is done through SAP standard transactions and not the program mentioned in the note. But the problem is the same. The change of status that should update the record in the ODS is not being generated, so I have only two records in the PSA, one for the original document and another for the clearing document, but there should be a third one for the update of the original document with status changed to 'C'. Am I right? Has this happened to anyone else out there?
    Please tell me where to start looking for answers !!
    Regards,
    Santiago
    Edited by: Santiago Reig on Mar 11, 2008 7:32 PM

    Hi..
    For FI DataSources you will not be able to see any data in the delta queue.
    Data is only populated in the delta queue during extraction.
    After the init, the extractor writes entries into the timestamp table BWOM2_TIMEST in the SAP R/3 system with a new upper limit for the timestamp selection.
    Based on that, new data is fetched from table BKPF by creation date (CPUDT).
    For 0FI_AR_4 (AR line items), selections are made from tables BSID/BSAD (Accounts Receivable).
    That covers new entries.
    For changed records, data is fetched from table BWFI_AEDAT, again depending on the timestamp table BWOM2_TIMEST.
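    Put as code, that selection logic looks roughly like this (a simplified sketch of the idea, not the actual extractor; the date limits would come from BWOM2_TIMEST):

        * Sketch: FI delta selection - new documents by CPUDT, changed ones via BWFI_AEDAT.
        DATA: lv_date_low  TYPE sy-datum,   " lower limit from BWOM2_TIMEST
              lv_date_high TYPE sy-datum,   " upper limit from BWOM2_TIMEST
              lt_bkpf      TYPE STANDARD TABLE OF bkpf,
              lt_keys      TYPE STANDARD TABLE OF bwfi_aedat,
              lt_items     TYPE STANDARD TABLE OF bsid.

        * New documents: selected from the header table by creation date.
        SELECT * FROM bkpf INTO TABLE lt_bkpf
          WHERE cpudt BETWEEN lv_date_low AND lv_date_high.

        * Changed documents: keys logged in BWFI_AEDAT by change date.
        SELECT * FROM bwfi_aedat INTO TABLE lt_keys
          WHERE aedat BETWEEN lv_date_low AND lv_date_high.

        * Line-item details for AR come from BSID/BSAD.
        IF lt_keys IS NOT INITIAL.
          SELECT * FROM bsid INTO TABLE lt_items
            FOR ALL ENTRIES IN lt_keys
            WHERE bukrs = lt_keys-bukrs
              AND belnr = lt_keys-belnr
              AND gjahr = lt_keys-gjahr.
        ENDIF.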
    Check the following blog :
    /people/swapna.gollakota/blog/2008/01/14/one-stage-stop-to-know-all-about-bw-extractors-part1
    Regards,
    Debjani.........

  • 0FI_AR_4 not fetching delta

    hi experts,
    I am having a problem with 0FI_AR_4. I ran the init with data transfer for this DataSource and it went through successfully, but the delta values cannot be fetched; the delta InfoPackage brings 0 records with green status.
    In RSA7 there are no records for this DataSource. I have checked in SMQ1 as well; no LUWs are stuck there.
    I have the 0FI_GL_4 DataSource, which is working fine with full as well as delta loads.
    I have checked BWOM2_TIMEST, which has the last-updated information for 0FI_AR_4, including the delta. But the delta run is not fetching the values.
    The same DataSource is working fine in the development system; there I can see delta records in RSA7 for this 0FI_AR_4 DataSource.
    please suggest me ..

    Hi..
    For FI DataSources you will not be able to see any data in the delta queue.
    Data is only populated in the delta queue during extraction.
    After the init, the extractor writes entries into the timestamp table BWOM2_TIMEST in the SAP R/3 system with a new upper limit for the timestamp selection.
    Based on that, new data is fetched from table BKPF by creation date (CPUDT).
    For 0FI_AR_4 (AR line items), selections are made from tables BSID/BSAD (Accounts Receivable).
    That covers new entries.
    For changed records, data is fetched from table BWFI_AEDAT, again depending on the timestamp table BWOM2_TIMEST.
    Check the following blog :
    /people/swapna.gollakota/blog/2008/01/14/one-stage-stop-to-know-all-about-bw-extractors-part1
    Regards,
    Debjani.........

  • 0FI_AR_4 Datasource, Delta

    Hi Experts,
    we are using the 0FI_AR_4 DataSource. It is delta-enabled, but the problem is that we can run the delta just once a day.
    Can anyone please let me know how to change this so that I can run the delta more than once a day?
    Any document or a link would be of great help.
    Thanks in advance.
    Ananth

    hi Ananth,
    take a look at Note 991429 - Minute Based extraction enhancement for 0FI_*_4 extractors
    https://websmp203.sap-ag.de/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=991429&_NLANG=E
    Symptom
    You would like to implement a 'minute based' extraction logic for the data sources 0FI_GL_4, 0FI_AR_4 and 0FI_AP_4.
    Currently the extraction logic allows only for an extraction once per day without overlap.
    Other terms
    general ledger  0FI_GL_4  0FI_AP_4  0FI_AR_4  extraction  performance
    Reason and Prerequisites
    1. There is a huge volume of data to be extracted on a daily basis from FI to BW, and this requires a lot of time.
    2. You would like to extract the data at more frequent intervals in a day, for example 3-4 times a day, without extracting all the data that you have already extracted on that day.
    In situations where there is a huge volume of data to be extracted, a lot of time is taken up when extracting on a daily basis. Minute-based extraction enables the extraction to be split into convenient intervals and run multiple times during a day. By doing so, the amount of data in each extraction is reduced and hence the extraction can be done more effectively. This should also reduce the risk of extractor failures caused by huge data volumes in the system.
    Solution
    Implement the relevant source code changes and follow the instructions in order to enable minute based extraction logic for the extraction of GL data. The applicable data sources are:
                            0FI_GL_4
                            0FI_AR_4
                            0FI_AP_4
    All changes below have to be implemented first in a standard test system. The new extractor logic must be tested very carefully before it can be used in a production environment. Test cases must include all relevant processes that would be used/carried in the normal course of extraction.
    Manual changes are to be carried out before the source code changes in the correction instructions of this note.
    1. Manual changes
    a) Add the following parameters to the table BWOM_SETTINGS (see the sketch at the end of this note):
                         MANDT  OLTPSOURCE    PARAM_NAME          PARAM_VALUE
                         XXX                  BWFINEXT
                         XXX                  BWFINSAF            3600
              Note: XXX refers to the specific client (like 300) under use/test.
              This can be achieved using transaction 'SE16' for table
                         'BWOM_SETTINGS'
                          Menu --> Table Entry --> Create
                          --> Add the above two parameters one after another
    b) To the views BKPF_BSAD, BKPF_BSAK, BKPF_BSID, BKPF_BSIK
                           under the view fields add the below field,
                           View Field  Table    Field      Data Element  DType  Length
                           CPUTM       BKPF    CPUTM          CPUTM      TIMS   6
                           This can be achieved using transaction 'SE11' for views
                           BKPF_BSAD, BKPF_BSAK , BKPF_BSID , BKPF_BSIK (one after another)
                               --> Change --> View Fields
                               --> Add the above mentioned field with exact details
    c) For the table BWFI_AEDAT, index-1 for extractors:
                       add the field AETIM (apart from the existing MANDT, BUKRS, and AEDAT)
                       and activate this non-unique index on all database systems (or at least on the database under use).
                       This can be achieved using transaction 'SE11' for table 'BWFI_AEDAT'
                           --> Display --> Indexes --> Index-1 For extractors
                           --> Change
                           --> Add the field AETIM in the last position (after the AEDAT field)
                           --> Activate the index on the database
    2. Implement the source code changes as in the note correction instructions.
    3. After implementing the source code changes using the SNOTE instructions, add the following parameters to the respective function modules and activate them.
    a) Function Module: BWFIT_GET_TIMESTAMPS
                        1. Export Parameter
                        a. Parameter Name  : E_TIME_LOW
                        b. Type Spec       : LIKE
                        c. Associated Type : BKPF-CPUTM
                        d. Pass Value      : Ticked/checked (yes)
                        2. Export Parameter
                        a. Parameter Name  : E_TIME_HIGH
                        b. Type Spec       : LIKE
                        c. Associated Type : BKPF-CPUTM
                        d. Pass Value      : Ticked/checked (yes)
    b) Function Module: BWFIT_UPDATE_TIMESTAMPS
                        1. Import Parameter (add after I_DATE_HIGH)
                        a. Parameter Name  : I_TIME_LOW
                        b. Type Spec       : LIKE
                        c. Associated Type : BKPF-CPUTM
                        d. Optional        : Ticked/checked (yes)
                        e. Pass Value      : Ticked/checked (yes)
                        2. Import Parameter (add after I_TIME_LOW)
                        a. Parameter Name  : I_TIME_HIGH
                        b. Type Spec       : LIKE
                        c. Associated Type : BKPF-CPUTM
                        d. Optional        : Ticked/checked (yes)
                        e. Pass Value      : Ticked/checked (yes)
    4. Working of minute based extraction logic:
                  The minute-based extraction considers the time, in addition to the date, of a new or changed document when selecting the data. The code is modified so that it considers the new flags in the BWOM_SETTINGS table (BWFINEXT and BWFINSAF); as long as these flags are not set as per the instructions, the earlier extraction logic remains in effect.
    The safety interval now depends on the flag BWFINSAF (in seconds; default 3600 = 1 hour). It tries to ensure that documents whose posting is delayed (for example, because update modules are delayed for any reason) are not lost. There is also specific coding that posts an entry to BWFI_AEDAT with the details of any document that failed to post within the safety limit of 1 hour, so that such documents are extracted at least as changed documents if they were missed as new documents. If a large number of documents fail to post within the safety limit, the volume of BWFI_AEDAT grows correspondingly.
    The flag BWFINSAF can be set to a particular value depending on specific requirements (in seconds, but at least 3600 = 1 hour), for example 24 hours / 1 day = 24 * 3600 = 86400. With the new logic switched on (flag BWFINEXT = 'X'), the other flags BWFIOVERLA, BWFISAFETY and BWFITIMBOR are ignored, while BWFILOWLIM and DELTIMEST work as before.
    As per the instructions above, index-1 of table BWFI_AEDAT is extended with the field AETIM, which enables the new logic to extract faster, as AETIM is also used in the new selection. The field can be removed again if the standard logic is restored.
    With the new extractor logic implemented, you can switch back to the standard logic at any time by resetting the flag BWFINEXT from 'X' to ' ' and extracting as before. But ensure that no extraction is running (for any of the 0FI_*_4 extractors/DataSources) while switching.
    As with the earlier logic, to restore a previous timestamp in the BWOM2_TIMEST table and retrieve the data of a previous extraction, LAST_TS can be set to the previous extraction timestamp, provided no extraction is currently running for that particular extractor or DataSource.
    With the frequency of extraction increased (say, 3 times a day), the volume of data extracted in each run decreases, and hence each extractor run takes less time.
    You should optimize the interval between extractor runs by testing which intervals give optimal performance. We cannot give a definite suggestion on this, as it varies from system to system and depends on the data volume in the system, the number of postings done every day, and other variable factors.
    To turn on the new logic, BWFINEXT has to be set to 'X', and reset to ' ' when reverting. This change must only be made when no extractions are running, considering all the points above.
                  With the new minute based extraction logic switched ON,
    a) Ensure BWFI_AEDAT index-1 is enhanced with the addition of AETIM and is active on the database.
    b) Ensure BWFINSAF is at least 3600 (1 hour) in BWOM_SETTINGS.
    c) Maintain an optimal value of DELTIMEST as needed (the recommended/default value is 60).
    d) Perform proper testing (functional and performance) in a standard test system, with settings and data identical to production, and make sure all results are positive before moving the changes to the production system.
    http://help.sap.com/saphelp_bw33/helpdata/en/af/16533bbb15b762e10000000a114084/frameset.htm
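    The two parameters from step 1a can also be created programmatically rather than through SE16; here is a minimal sketch (field names as listed in the note above; MANDT is filled automatically for a client-dependent table, and OLTPSOURCE stays blank as in the table in step 1a):

        * Sketch: create the BWOM_SETTINGS entries for minute-based extraction.
        DATA ls_set TYPE bwom_settings.

        ls_set-param_name  = 'BWFINEXT'.
        ls_set-param_value = ' '.          " set to 'X' only when activating the new logic
        INSERT bwom_settings FROM ls_set.

        ls_set-param_name  = 'BWFINSAF'.
        ls_set-param_value = '3600'.       " safety interval in seconds (at least 1 hour)
        INSERT bwom_settings FROM ls_set.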

  • 0fi_gl_4 init is not pulling any data

    Hi All,
      We have 0FI_GL_4 installed and running fine with deltas. We thought of reinitializing the delta for some business reasons. I deleted the delta initialization and started a delta initialization with data transfer with a Fiscal Year/Period selection. It is not pulling any records, and it says there is no new delta since the last extraction. I am sure there is data, as this extractor pulled data for this period before.
      Then I tried testing this using RSA3. Even RSA3 is not pulling any data, although there is data in BKPF/BSEG for these periods.
    The extractor is writing the correct timestamps to BWOM2_TIMEST, and all other entries in tables like TPS31 and BWOM_SETTINGS are okay.
    Has anybody faced this problem? Has there been any change in the FI extractors recently?
    Your help is greatly appreciated.
    Thanks,
    Trinadha

    Hello Trinadha,
    You may want to have a look at OSS note 640633. Following is the text from note.
    Hope it helps.
    Regards,
    Praveen
    Symptom
    Deleting the Init 0FI_GL_4 deletes all BWFI_AEDAT data
    Other terms
    0fi_gl_4, 0fi_ar_4, 0fi_ap_4, 0fi_tx_4, BWFI_AEDAT, BWFI
    Reason and Prerequisites
    The system was intentionally programmed like this.
    Solution
    The error is corrected in the standard system with PI 2004.1.
    The reset logic is changed with this note so that during the reset, the 0FI_*_4 DataSources only delete their own entries in the initial selection table (ROOSPRMSF) and in the timestamp table (BWOM2_TIMEST). The data of table BWFI_AEDAT is only deleted if no more 0FI_*_4 DataSources are active.
    Proceed as follows for the advance correction:
    1. Install the code in function module BWFIT_RESET_TIMESTAMPS or include LBWFITTOP.
    2. Create an executable program ZFCORR_BW_02 with transaction /NSE38.
    Program title:        Change 0FI_*_4 DataSources.
    Selection texts:      Name      Text
                          OBJVERS   Version
                          OLTPSOUR  DataSource
    Implement the source code of program ZFCORR_BW_02.
    Execute the program for the DataSources 0FI_AR_4, 0FI_AP_4 and 0FI_TX_4. The program uses the BWFIT_RESET_TIMESTAMPS function module as a reset module for these DataSources.

  • OSS Notes Please

    Dear Gurus,
    Can anyone send me the following OSS notes if you have them: 567747, 130253 and 417307. Your kind help will definitely be rewarded.
    My mail ID is [email protected]
    Best Regards
    Mohan Kumar
    Message was edited by: mohan kumar

    hi Mohan,
    I think you will need to have access to OSS yourself;
    note 567747 is a composite one that contains several notes.
    Sent to your mail ...
    130253
    Symptom
    Uploading transaction data to BW takes too long
    Other terms
    Business Warehouse, data upload, batch upload, transaction data upload,
    performance, runtime, data load, SPLIT_PARTITION_FAILED ORA00054
    Reason and Prerequisites
    Loading data from a mySAP system (for example, R/3) or from a file takes a very long time.
    Solution
    The following tips are a general check list to make the mass data upload to the Business Warehouse (BW) System as efficient as possible.
    Tip 1:
    Check the parameter settings of the database as described in composite Note 567745.
    Check the basis parameter settings of the system.
    Note 192658 Setting basis parameters for BW Systems
    See the following composite notes:
    Note 567747 Composite note BW 3.x performance: Extraction & loading
    Note 567746 Composite note BW 3.x performance: Query & Web applications
    Tip 2:
    Import the latest BW Support Package and the latest kernel patch into your system.
    Tip 3:
    Before you upload the transaction data, you should make sure that ALL related master data has been loaded into your system. If no master data has been loaded yet, the upload may take up to 100 percent longer because in this case, the system must retrieve master data IDs for the characteristic attributes and add new records to the master data tables.
    Tip 4:
    If possible, always use TRFC (PSA) as the transfer method instead of
    IDocs. If you (have to) use IDocs, keep the number of data IDocs as low
    as possible. We recommend an IDoc size of between 10000 (Informix) and 50000 (Oracle, MS SQL Server).
    To upload from a file, set this value in Transaction RSCUSTV6.
    To upload from an R/3 system, set this value in R/3 Customizing (SBIW -> General settings -> Control parameters for data transfer).
    Tip 5:
    If possible, load the data from a file on the application server and not from the client workstation as this reduces the network load. This also allows you to load in batch.
    Tip 6:
    If possible, use a fixed record length when you load data from a file (ASCII file). For a CSV file, the system only carries out the conversion to a fixed record length during the loading process.
    Tip 7:
    When you load large data quantities from a file, we recommend that you split the file into several parts. We recommend using as many files of the same size as there are CPUs. You can then load these files simultaneously to the BW system in several requests. To do this, you require a fast RAID.
    Tip 8:
    When you load large quantities of data into InfoCubes, you should delete
    the secondary indexes before the loading process and then recreate them afterwards if the following applies: the number of records that are loaded is large in comparison to the number of records that already exist in the (uncompressed) F fact table. For non-transactional InfoCubes, you must delete the indexes to be able to carry out parallel loading.
    Tip 9:
    When you load large quantities of data in an InfoCube, the number range buffer should be increased for the dimensions that are likely to have a high number of data sets.
    To do this, proceed as follows. Use function module RSD_CUBE_GET to find the object name of the dimension that is likely to have a high number of data sets.
    Function module settings:
    I_INFOCUBE = 'Infocube name'
    I_OBJVERS = 'A'
    I_BYPASS_BUFFER = 'X'
    The numbers for the dimensions are then contained in table 'E_T_DIME', column 'NUMBRANR'. If you enter 'BID' before this number, you get the relevant number range (for example BID0000053).
    You can use Transaction SNRO (-> ABAP/4 Workbench -> Development --> Other tools --> Number ranges) to display all number ranges for the dimensions used in BW if you enter BID*. You can use the object name that was determined beforehand to find the required number range.
    By double-clicking this line, you get to the number range maintenance. Choose Edit -> Set-up buffering -> Main memory, to define the 'No. of numbers in buffer'.
    Set this value to 500, for example. The size depends on the expected data quantity in the initial and in future (delta) uploads.
    !! Never buffer the number range for the package dimension !!
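    A sketch of the lookup described in Tip 9 (the function module name and the parameters I_INFOCUBE, I_OBJVERS, I_BYPASS_BUFFER and E_T_DIME are quoted from the note above; the parameter binding and the line type of E_T_DIME are assumptions, so check the actual signature in SE37 first):

        * Sketch: find the dimension number ranges of an InfoCube (Tip 9).
        DATA lt_dime TYPE STANDARD TABLE OF rsd_s_dime.   " line type assumed

        CALL FUNCTION 'RSD_CUBE_GET'
          EXPORTING
            i_infocube      = 'ZSALESCUBE'   " hypothetical InfoCube name
            i_objvers       = 'A'
            i_bypass_buffer = 'X'
          IMPORTING
            e_t_dime        = lt_dime.       " binding assumed; verify in SE37
        * Prefix the NUMBRANR value with 'BID' (e.g. BID0000053) to get the
        * number range object to buffer in transaction SNRO, as described above.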
    Tip 10:
    When you load large quantities of data, you should increase the number
    range buffer for the info objects that are likely to have a high number of data sets. To do this, proceed as follows:
    Use function module RSD_IOBJ_GET to find the number range name of the info object that is likely to have a high number of data sets.
    Function module settings:
    I_IOBJNM = 'Info object name'
    I_OBJVERS = 'A'
    I_BYPASS_BUFFER = 'X'
    The number for the info object is in table 'E_S_VIOBJ', column 'NUMBRANR'. Enter 'BIM' in front of this number to get the required number range (for example BIM0000053).
    Use Transaction SNRO (-> ABAP/4 Workbench -> Development -> Other tools -> Number ranges) to display all number ranges used for the info objects in BW by entering BIM*. By entering the object name determined beforehand you can find the desired number range.
    By double-clicking this line you get to the number range object maintenance. Choose Edit -> Set-up buffering -> Main memory to define the 'No. of numbers in buffer'.
    Set this value to 500, for example. The size depends on the expected data quantity in the initial and in future (delta) uploads.
    !! Never buffer the number range object for the characteristic 0REQUEST!!
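    The analogous lookup for Tip 10, again as a hedged sketch: the structure type RSD_S_VIOBJ and the parameter binding are assumptions to verify in SE37/SE11, and 0MATERIAL stands in for your high-volume InfoObject.

        " Find the BIM number range object of a single InfoObject.
        DATA: ls_viobj TYPE rsd_s_viobj.

        CALL FUNCTION 'RSD_IOBJ_GET'
          EXPORTING
            i_iobjnm        = '0MATERIAL'   " example InfoObject
            i_objvers       = 'A'
            i_bypass_buffer = 'X'
          IMPORTING
            e_s_viobj       = ls_viobj.

        " Prefix 'BIM' to the number, e.g. BIM0000053
        WRITE: / 'BIM', ls_viobj-numbranr.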
    417307
    Symptom
    The system load during data loading is too high or cannot be justified.
    The Customizing settings in the extractor IMG (Transaction SBIW in the OLTP system; it can be called directly in the OLTP system or from the BW system via Customizing) do not yield any considerable improvement, or their effect is unclear.
    The settings in Table ROIDOCPRMS or in the Scheduler in the BW system are not taken into account by some extractors. How is the data package size defined for the data transfer to the BW? Are there application-specific features to determine the package size? If so, what are they?
    Other terms
    SBIW, general settings extractors, MAXSIZE, data volume, OLTP, service API, data package size, package size, performance
    Reason and Prerequisites
    The general formula is:
          Package size = MAXSIZE * 1000 / size of the transfer structure,
                        but not more than MAXLINES.
    You can look up the transfer structure (extract structure) in Table ROOSOURCE in the active version of the DataSource and determine its size via SE11 (DDIC) -> Utilities -> Runtime object -> Table length.
    The system default values of 10,000 or 100,000 are valid for the MAXSIZE and MAXLINES parameters (see the F1 help for the corresponding fields in ROIDOCPRMS). You can use the IMG Transaction SBIW (in the OLTP system) "Maintain General settings for extractors" to overwrite these parameters in Table ROIDOCPRMS on a system-specific basis. You also have the option of overriding these values in the scheduler (in the target BW system). However, in the Scheduler (InfoPackage) you can only reduce the MAXSIZE. The advantage of using the Scheduler to carry out maintenance is that the values are InfoSource-specific.
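    For illustration (the record length is invented): with the default MAXSIZE = 10,000 and a transfer structure of 500 bytes, the package size is 10,000 * 1000 / 500 = 20,000 records; since this is below the default MAXLINES of 100,000, each package holds 20,000 records. With a wide transfer structure of 2,000 bytes, the package size would drop to 5,000 records.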
    However, some extractors have their own flow logic and do not take MAXSIZE 1:1 from ROIDOCPRMS.
    This Note does not cover all SAP applications.
    Solution
    Application/DataSource               Standard settings or note
    Generic extractor                     Standard (Example: Note 409641)
    Delta Extraction via DeltaQueue       Standard as of PlugIn 2000.2
                                               Patch 3
    LO-LIS                                 Standard
    Logistic Cockpit SD                       Notes 419465 and 423118
    Logistic Cockpit MM-IM:
    Extraction 2LIS_03_UM                     Notes 537235 and 585750
    Extraction 2LIS_03_BF                     Note 454267
               In general, the following applies to Logistic Cockpit extraction: the package size that is set only serves as a guideline value.
                Depending on the application, the contents and structure of the documents, and the selection in the reconstruction program, the actual size of the transfer packages may differ considerably. That is not an error.
                For the 'Queued Delta' update mode, a package size of 9999 LUWs (= Logical Units of Work; in this particular case, documents or posting units per transaction) is set for the LO Cockpit. However, if a single transaction updates more than 9999 documents at once, this limit cannot be kept, because an update process cannot be split.
                In the case of the 'Direct Delta' update mode there is no package size for LUW bundling: in the DeltaQueue (RSA7) every LUW is updated individually.
               Also note the following:
                         If you want to transfer large data sets into the BW system, it is a good idea to carry out the statistical data setup and the subsequent data transfer in several sub-steps. The selections for the statistical data setup and for the BW InfoPackage must correspond to each other. For performance reasons, we recommend using as few selection criteria as possible and avoiding complex selections. After loading the init delta with selection 1, the setup table has to be deleted and rebuilt with selection 2.
                         Bear the following in mind: the delta is loaded for the union of all init-delta selections. Make sure the selections do not impede the delta load. For example, if you have initialized the delta for the periods January 1, 1999 to December 1, 2000 and December 2, 2000 to August 1, 2001, you get the time interval January 1, 1999 to August 1, 2001 in BW; documents from August 2, 2001 onwards are no longer loaded.
    CO-OM                                     Note 413992
    PS                                        Standard or according
                                               to Note 413992
    CO-PC                                     Standard
    FI (0FI-AR/AP-3)                      First, the tables with open items (BSID for customers, BSIK for vendors) are read. The package size from the MAXSIZE field in Table ROIDOCPRMS is used in the extraction from these tables. If the packages are grouped together according to ROIDOCPRMS and a few data records still remain, these remaining records are transferred in one additional package.
    After the open items are extracted, the system extracts from the table of cleared items (BSAD for customers, BSAK for vendors). In this extraction, the package size from the MAXSIZE field is adhered to.
    FI (0FI-AR/AP-4, new as of BW 3.0A)   Like FI-AR/AP-3, but with one difference: if there are remaining records after the system reads the open items using the setting in the MAXSIZE field, they are not transferred in one extra package, but added to the first package of items read from the table of cleared items. For example, if 10 data packages with 15000 data records each are extracted from Table BSID in accordance with ROIDOCPRMS, and 400 data records remain, the size of the first data package from Table BSAD is 15400.
    Both new and changed records are formatted in the following sequence in the delta transfer: 1) new BSIK/BSID records; 2) new BSAK/BSAD records; 3) changed BSIK/BSID records; 4) changed BSAK/BSAD records.
    Package size 0FI_GL_4:
    Prior to Note 534608 and the related notes, the package size could vary considerably, since MAXLINES was applied to the document headers only. All document lines for those document headers were then read and transferred. As a result, the packages were 2 to 999 times as large as MAXLINES, depending on the number of line items per document.
    Note 534608 and the related notes changed the logic so that MAXLINES is now also applied to the document lines. For each package, MAXLINES can be exceeded by up to 998 lines, since a document is always transferred completely in one package. Smaller 'remaining packages' may also occur; for example, if MAXLINES = 10000 and 10000 document headers with 21000 lines are selected, 2 x 10000 lines are transferred and the remaining 1000 lines go into a separate package. Selection logic in 0FI_GL_4: the new FI documents are selected via CPUDT; the changed documents are then selected via Table BWFI_AEDAT. When the selection switches from new to changed documents, a package may occur that consists of the 'remainder' of the CPUDT selection and the first package of the BWFI_AEDAT selection. This package can have a maximum size of 2 x MAXLINES.
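    The two-step selection just described can be pictured with a hedged ABAP sketch. This is not the extractor's actual code: the BWFI_AEDAT field names (BUKRS, BELNR, GJAHR, AEDAT) and the date window are assumptions for illustration.

        " Simplified picture of the 0FI_GL_4 delta selection.
        TYPES: BEGIN OF ty_doc,
                 bukrs TYPE bkpf-bukrs,
                 belnr TYPE bkpf-belnr,
                 gjahr TYPE bkpf-gjahr,
               END OF ty_doc.
        DATA: lt_new     TYPE STANDARD TABLE OF ty_doc,
              lt_changed TYPE STANDARD TABLE OF ty_doc,
              lv_from    TYPE sy-datum VALUE '20070101',  " hypothetical
              lv_to      TYPE sy-datum VALUE '20070131'.  " delta window

        " 1) New documents: entered (CPUDT) within the delta window
        SELECT bukrs belnr gjahr FROM bkpf
          INTO TABLE lt_new
          WHERE cpudt BETWEEN lv_from AND lv_to.

        " 2) Changed documents: logged in BWFI_AEDAT by the change logic
        SELECT bukrs belnr gjahr FROM bwfi_aedat
          INTO TABLE lt_changed
          WHERE aedat BETWEEN lv_from AND lv_to.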
    FI-FM                                  Note 416669
    EC-PCA                                 For the most part, the system adheres to the standard settings, but for technical reasons packages smaller than MAXSIZE, or up to 7200 records larger than MAXSIZE, may occur despite the setting in Table ROIDOCPRMS.
    FI-SL                                  as in EC-PCA
    PT                                     Standard (refer to Note
                                           397209)
    PY                                     Standard
    PA                                     Standard
    RE                                     Standard
    ISR-CAM (Category Management,
             new as of PlugIn 2001.1)     Standard
    CO-PA                                  During the initialization and full update in Profitability Analysis, a join of two tables is always read (for details, see Note 392635). To avoid terminations caused by SELECT statements that run for too long, access occurs in intervals over the object numbers (fixed size 10,000). New intervals are read until the package size requested by BW is reached. The size of a data package is therefore always equal to or larger than the specification, but it can vary considerably.
    Master data:
    Business partner                       Standard
    Product                                Standard
    Customer                               Standard
    Vendor                                 Standard
    Plant                                  Standard
    Material                               Standard
    567747
    Symptom
    You want to improve the performance of the extraction and loading of your data into SAP BW 3.x.
    Solution
    This is a composite note that deals with performance-relevant topics in the area of extraction and loading.
    If you encounter performance problems, ensure that the current Support Package has been imported.
    This note is continually updated. You should therefore download a new version on a regular basis.
    You will find further documents in the SAP Service Marketplace, alias bw under the folder "Performance".
    Contents:
    I.    Extraction from the OLTP
    II.   Loading generally
    III.  Master data
    IV.   Roll-Up/aggregate structure
    V.    Compression
    VI.   Hierarchies/attribute realignment run
    VII.  DataMart interface
    VIII. ODS objects
    IX.   Miscellaneous
    I.    Extraction from the OLTP
    Note 417307: extractor packet size: Collective note for applications
    Note 505700: LBWE: New update methods from PI 2002.1
    Note 398041: INFO: CO-OM/IM IM content (BW)
    Note 190038: Composite note performance for InfoSource 0CO_PC_01 and
    Note 436393: Performance improvement for filling the setup tables
    Note 387964: CO delta extractors: poor performance for Deltainit
    II.   Loading generally
    Note 130253: Notes on uploading transaction data into BW
    Note 555030: Deactivating BW-initiated DB statistics
    Note 620361: Performance data loading/Admin. data target, many requests
    III.  Master data
    Note 536223: Activating master data with navigation attributes
    Note 421419: Parallel loading of master data (several requests)
    IV.   Roll-Up/aggregate structure
    Note 484536: Filling aggregates of large InfoCubes
    Note 582529: Rollup of aggregates & indexes (again as of BW 3.0B Support Package 9)
    V.    Compression
    Note 375132: Performance optimization for InfoCube condensation
    Note 583202: Change run and condensing
    VI.   Hierarchies/attribute realignment run
    Note 388069: Monitor for the change run
    Note 176606: Apply Hierarchy/Attribute change ... long runtime
    Note 534630: Parallel processing of the change run
    Note 583202: Change run and condensing
    VII.  DataMart interface
    Note 514907: Processing complex queries (DataMart, and so on)
    Note 561961: Switching the use of the fact table view on/off
    VIII. ODS objects
    Note 565725: Optimizing the performance of ODS objects in BW 3.0B
    IX.   Miscellaneous
    Note 493980: Reorganizing master data attributes and master data texts
    Note 729402: Performance when compiling and searching in the AWB

  • BWFI_AEDAT table not getting updated - FI_GL_4 Delta

    Hi Friends,
    We have just started running delta loads using FI_GL_4.
    The changed records are being written correctly to BWFI_AEDAT and loaded into BW for all documents except intercompany documents.
    Is there a particular reason why changes to an intercompany document would not be recorded in this table, or is there a way to find out how the table gets populated so that I can debug this issue?
    Any help is highly appreciated..
    Ashish.

    Hi,
    How many days after the init are you going for delta?
    If there is an entry in BWFI_AEDAT, there is no reason why the delta should not come.
    Also check BKPF via CPUDT to see whether new documents have been created in this period.
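    To narrow this down, you could check directly whether the change to one of the affected intercompany documents was logged at all. A hedged snippet (the key values are placeholders; verify the BWFI_AEDAT field names in SE11):

        " Was the change to this document recorded for delta extraction?
        SELECT COUNT( * ) FROM bwfi_aedat
          WHERE bukrs = '1000'          " hypothetical company code
            AND belnr = '0100000123'    " hypothetical document number
            AND gjahr = '2007'.
        IF sy-dbcnt = 0.
          WRITE: / 'Change not logged - it will not arrive via delta.'.
        ENDIF.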
    Regards,
    Saurabh

  • MB5B report: tables for opening and closing stock by date

    Hi Frds,
    I am trying to get opening and closing stock values by date from tables MARD and MBEW (Material Valuation), but the results do not match the MB5B report.
    Could anyone suggest the correct tables for fetching date-wise opening and closing stock values as shown in MB5B?
    Thanks
    Mohan M

    Hi,
    Please check the links below:
    Query for Opening And  Closing Stock
    Inventory Opening and Closing Stock
    open stock and closing stock
    Kuber
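    For context: MARD and MBEW only hold the current stock, so they can never match MB5B for an arbitrary key date; MB5B reconstructs historical stock from the material documents (MKPF/MSEG). A simplified, hedged sketch of that idea follows (unrestricted stock only, no special stock, no valuation; the names and values are illustrative):

        " Reconstruct stock at a key date: start from current stock and
        " roll back all movements posted after the key date.
        DATA: lv_stock   TYPE labst,
              lv_menge   TYPE mseg-menge,
              lv_shkzg   TYPE mseg-shkzg,
              lv_matnr   TYPE mard-matnr VALUE 'MAT-001',   " hypothetical
              lv_werks   TYPE mard-werks VALUE '1000',
              lv_lgort   TYPE mard-lgort VALUE '0001',
              lv_keydate TYPE sy-datum   VALUE '20071231'.

        SELECT SINGLE labst FROM mard INTO lv_stock
          WHERE matnr = lv_matnr AND werks = lv_werks AND lgort = lv_lgort.

        SELECT m~menge m~shkzg INTO (lv_menge, lv_shkzg)
          FROM mseg AS m INNER JOIN mkpf AS k
            ON m~mblnr = k~mblnr AND m~mjahr = k~mjahr
          WHERE m~matnr = lv_matnr AND m~werks = lv_werks
            AND m~lgort = lv_lgort AND k~budat > lv_keydate.
          " Undo the movement: subtract receipts (S), add back issues (H)
          IF lv_shkzg = 'S'.
            lv_stock = lv_stock - lv_menge.
          ELSE.
            lv_stock = lv_stock + lv_menge.
          ENDIF.
        ENDSELECT.

        WRITE: / 'Stock at', lv_keydate, ':', lv_stock.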
