Initial Load / Full Load

Is there a Difference Between an Initial Load and a Full Load?

Hello Xcaliber,
There is a difference if you put them in the context of delta loads.
An initial load performs a full extraction for some selection, and after that, delta loads can take place.
A full load also extracts everything matching its selection, but you cannot run delta loads after it (unless you are doing a pseudo-delta).
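As a rough sketch of the difference (the table, column, and bind names here are made up for illustration, not from any real system), the three load types differ mainly in their WHERE clause:

    -- Initial load / full load: everything matching the chosen selection
    SELECT doc_no, amount, fiscal_period
    FROM sales_items
    WHERE fiscal_period BETWEEN '001.2005' AND '010.2007';

    -- Delta load (only possible after an init): just what changed since
    -- the last extraction, simplified here to a timestamp comparison
    SELECT doc_no, amount, fiscal_period
    FROM sales_items
    WHERE last_update_ts > :last_extract_ts;

    -- Pseudo-delta: a full load restricted to a rolling slice (e.g. the
    -- current period), with that slice deleted from the target beforehand
    SELECT doc_no, amount, fiscal_period
    FROM sales_items
    WHERE fiscal_period = :current_period;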
Hope this helps.

Similar Messages

  • Error while running full load

    Hi all,
    I'm getting the below error while running the mappings in DAC:
    INFO : LM_36488 [Fri Sep 11 02:47:04 2009] : (580|4952) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [VAR_27028 Use override value [4,0] for mapping parameter:[MPLT_SA_ORA_CODES.$$DATASOURCE_NUM_ID].]
    Regards
    Rama

    Hi all,
    Details of the error below:
    INFO : LM_36435 [Fri Sep 11 03:11:49 2009] : (580|4952) Starting execution of workflow [SDE_ORA_CodeDimension_CostCenter] in folder [SDE_ORA11510_Adaptor] last saved by user [Administrator].
    INFO : LM_44195 [Fri Sep 11 03:11:49 2009] : (580|4952) Workflow [SDE_ORA_CodeDimension_CostCenter] service level [SLPriority:5,SLDispatchWaitTime:1800].
    INFO : LM_36330 [Fri Sep 11 03:11:49 2009] : (580|4952) Start task instance [Start]: Execution started.
    INFO : LM_36318 [Fri Sep 11 03:11:49 2009] : (580|4952) Start task instance [Start]: Execution succeeded.
    INFO : LM_36505 : (580|4952) Link [Start --> SDE_ORA_CodeDimension_CostCenter]: empty expression string, evaluated to TRUE.
    INFO : LM_36388 [Fri Sep 11 03:11:49 2009] : (580|4952) Session task instance [SDE_ORA_CodeDimension_CostCenter] is waiting to be started.
    INFO : LM_36682 [Fri Sep 11 03:11:49 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter]: started a process with pid [5644] on node [node01_obidevapp1].
    INFO : LM_36330 [Fri Sep 11 03:11:49 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter]: Execution started.
    INFO : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [TM_6793 Fetching initialization properties from the Integration Service. : (Fri Sep 11 03:11:49 2009)]
    INFO : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [DISP_20305 The [Preparer] DTM with process id [5644] is running on node [node01_obidevapp1].
    : (Fri Sep 11 03:11:49 2009)]
    INFO : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [PETL_24036 Beginning the prepare phase for the session.]
    INFO : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [TM_6721 Started [Connect to Repository].]
    INFO : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [TM_6722 Finished [Connect to Repository]. It took [0.312494] seconds.]
    INFO : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [TM_6794 Connected to repository [Oracle_BI_DW_Base] in domain [Domain_obidevapp1] user [Administrator]]
    INFO : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [TM_6721 Started [Fetch Session from Repository].]
    INFO : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [TM_6722 Finished [Fetch Session from Repository]. It took [0.406242] seconds.]
    INFO : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [VAR_27028 Use override value [(2,4,5,9,0)] for mapping parameter:[MPLT_SA_ORA_CODES.$$DATASOURCE_NUM_ID].]
    ERROR : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [CMN_1761 Timestamp Event: [Fri Sep 11 03:11:50 2009]]
    ERROR : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [VAR_27056 Data conversion error in converting [(2,4,5,9,0)].]
    ERROR : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [CMN_1761 Timestamp Event: [Fri Sep 11 03:11:50 2009]]
    ERROR : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [VAR_27054 Error in assigning initial data value to mapping parameter:[MPLT_SA_ORA_CODES.$$DATASOURCE_NUM_ID].]
    ERROR : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [CMN_1761 Timestamp Event: [Fri Sep 11 03:11:50 2009]]
    ERROR : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [TM_6270 Error: Variable parameter expansion error.]
    ERROR : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [CMN_1761 Timestamp Event: [Fri Sep 11 03:11:50 2009]]
    ERROR : LM_36488 [Fri Sep 11 03:11:50 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] : [TM_6163 Error initializing variables and parameters for the partition.]
    ERROR : LM_36320 [Fri Sep 11 03:11:52 2009] : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter]: Execution failed.
    WARNING : LM_36331 : (580|4972) Session task instance [SDE_ORA_CodeDimension_CostCenter] failed and its "fail parent if this task fails" setting is turned on. So, Workflow [SDE_ORA_CodeDimension_CostCenter] will be failed.
    ERROR : LM_36320 [Fri Sep 11 03:11:52 2009] : (580|4972) Workflow [SDE_ORA_CodeDimension_CostCenter]: Execution failed.
    Regards,
    Rama

  • Error in Transaction Data - Full Load

    Hello All,
        This is the current scenario that I am working on:
    There is a process chain which has two transaction data load (FULL LOAD) processes to the same cube. In the process monitor everything seems okay (the data loads seem fine), but the overall status for both loads failed due to 'Error in source system/extractor', and it says 'error in data selection'.
    Processing is set to data targets only.
    On managing the cube, I found 3 old requests that were red and NOT set to QM status red. So I set them to QM status red and deleted them, and the difference I saw was that the subsequent requests became available for reporting.
    Now this data load, which is a full load, takes forever. I don't even see an 'Initialize Delta Update' option there; can anyone tell me why I don't see that?
    And, coming to the main question: how do I get the process chain completed? Will I have to repeat the data loads, or what options do I have to get a successfully running process chain, or at least these 2 full loads of transaction data?
    Thank you - points will be assigned for helpful answers
    - DB
    Edited by: Darshana on Jun 6, 2008 12:01 AM
    Edited by: Darshana on Jun 6, 2008 12:05 AM

    One interesting discovery I just made in R/3 was this job log with respect to the above process chain:
    It says that the job was cancelled in R/3 because the material ledger currencies were changed.
    The process chain is for inventory management, and the data load processes that get cancelled in the source system are:
    1. Material Valuation: period ending inventories
    2. Material Valuation: prices
    The performance assistant says the following, but I am not sure how far I can work on the R/3 side to rectify this:
    Material ledger currencies were changed
    Diagnosis
    The currencies currently set for the material ledger and the currency types set for valuation area 6205 differ from those set at conversion of the data (production startup).
    System Response
    The system does not allow you to post transactions after changing the currency settings to ensure consistency.
    Procedure
    Replace the current settings with those entered at production start-up.
    If you wish to change the currency settings, you must use programs to convert data from the old to the new currencies.
    Inform your system administrator
    Anyone knowledgeable in this area, please give your inputs.
    - DB

  • Full load - Every month data loading without deleting previous data

    Dear SDN,
    I am loading (full load) data from 001.2005 to 010.2007 using 0FI_GL_1 with the selection option Fiscal Year/Period (From: 001.2005, To: 010.2007)...
    From next month (011.2007) onwards, every load deletes the entire data set and loads it again (since I have set the option in the InfoPackage -- Data Targets tab -- 'Delete entire content of data target')...
    My doubt is whether we can load only the next month's data without deleting the whole data set. If that is the case, should I use a Fiscal Year/Period range or not?
    But the client needs data only from 001.2005 onwards, not before that.
    Please help me to resolve this issue..
    Help will be greatly appreciated with points...
    Thanks....

    Hello Venkat,
    Unfortunately you cannot do it that way, because in period 011.2007 (assume this is the period you want to newly load to the cube) there might be postings to previous periods, and those postings should also be reflected in the cube. The only way is to delete and reload using a full load.
    Another option: use an ODS before sending the data to the cube. You can load the full data every time; the ODS key fields are checked by the system and the necessary inserts and updates are done automatically, so there won't be duplications. Finally, only the new data is sent to the cube as delta.
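    A minimal sketch of that upsert idea in SQL (table and column names are hypothetical; a real ODS/DSO does this internally via its active and change log tables):

        -- Each full load lands in a staging table; the ODS is keyed by doc_no
        MERGE INTO ods_active t
        USING stage_full_load s
        ON (t.doc_no = s.doc_no)
        WHEN MATCHED THEN
          UPDATE SET t.amount = s.amount, t.changed_on = s.changed_on
        WHEN NOT MATCHED THEN
          INSERT (doc_no, amount, changed_on)
          VALUES (s.doc_no, s.amount, s.changed_on);
        -- Only rows that were actually inserted or changed end up in the
        -- change log, and the change log is what feeds the cube as delta.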
    Sarhan.

  • Process chain for full load.

    Hi,
    I have developed a process chain for delta loads. Now my question is: can we use a process chain to load a full-load DataSource for master data? If yes, what are the steps?
    If we can use a process chain for a full load of master data, can we then use both delta and full load InfoPackages in the process chain?
    Please update.

    Hi Sata,
      You can include both full and delta loads in the process chain. But make sure that you execute the init InfoPackage manually before executing the delta InfoPackage in the process chain.
    For a full load process chain, add: InfoPackage --> DTP for full load --> delete PSA data variant (after 2 days) --> attribute change run.
    Normally load the hierarchy first, then the attributes and texts.
    Assign points if it helped you.
    Regards,
    Senoy

  • DTP Delta and Full Load...

    Dear All,
    I'm working with SAP BW 7.3 and I have a standard data flow: DataSource, DSO and InfoCube. My process chain faced an error, and for the last week or so I could not fetch deltas. To remove the error I deleted all records from the InfoCube, since I had all requests available in the DSO. Then I manually ran a full load from the DSO to the InfoCube. But the next day, when my process chain executed, it brought all previous and new records into the InfoCube again; the delta was properly fetched into the DSO, but it did not come into the InfoCube the way it did before. When I checked the Active and Change Log tables of the DSO, the two had the same number of records.
    1. What could be the reason that the delta DTP between the DSO and the InfoCube is not fetching only the delta records? It was fetching delta records before the full load.
    2. Have I made a mistake or missed a setting in the DTP while executing the full load DTP between the DSO and the InfoCube?
    3. What are these options for in the Extraction tab of the DTP, and when do we use each of the following and why:
        i.   Active Table (with Archive)
        ii.  Active Table (Without Archive)
        iii. Archive (Full Extraction only)
        iv. Change Log
    I will appreciate your reply.
    Many thanks!
    Tariq Ashraf

    Hi Tariq,
    Please check the below:
    Delta Init. Extraction From:
    Active Table (with Archive)
    The data is read from the DSO active table and from the archived data.
    Active Table (Without Archive)
    The data is only read from the active table of a DSO. If there is data in the archive or in near-line storage at the time of extraction, this data is not extracted.
    Archive (Full Extraction Only)
    The data is only read from the archive data store. Data is not extracted from the active table.
    Change Log
    The data is read from the change log and not from the active table of the DSO.
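    In SQL terms the difference is roughly the following (table names are hypothetical; a real DSO generates a /BIC/A...00 active table and a /BIC/B... change log table):

        -- Active Table: one row per key with the current state - what a
        -- full load or a delta init typically reads
        SELECT * FROM dso_active_table;

        -- Change Log: before/after images of every change, per request -
        -- what a delta DTP reads to pick up only the new changes
        SELECT * FROM dso_change_log
        WHERE request_id > :last_delta_request;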
    Hope this answers your question. Let me know if anything else is required.
    Regards,
    Ramesh V

  • BW Delta and Full load

    Hi !
    This is a simple question with BW 3.5:
    A standard extractor with delta is extracting daily deltas to a 1st level ODS -> 2nd level ODS -> Cube.
    Now I want to add 2 ODSes in the same flow, and they should also catch all these deltas. Along with that, I want to load the remaining period of this month (e.g. a full load from the 1st to today, and deltas from then onwards).
    So new structure would be
    0CRM_SALES_ACT_1 -> Level 1 existing ODS delta -> Level 2 existing ODS Delta
                                      -> New level 1 ODS with delta -> New Level 2 ODS with delta
    Can you please let me know how to do that ?
    Full or delta InfoPackage?
    (FYI - I loaded a full load for history and then tried delta, but the delta failed saying a full load is already there in the system.)
    Please do not post duplicate threads
    Edited by: Pravender on Apr 12, 2011 7:08 PM

    Partik,
    My suggestion is to load data from the existing Level 1 DSO to the 2 new DSOs instead of loading directly from the source to the new DSOs.
    To do that, remove the initialization from the Level 1 DSO to the 2nd level DSO and re-initialize WITHOUT data transfer, taking the new DSOs into account as well, then continue the deltas.
    Use full load InfoPackages to load the historic data from the Level 1 DSO to the 2 new DSOs.
    Flow now should be:
    0CRM_SALES_ACT_1 -> Level 1 existing ODS delta -> Level 2 existing ODS Delta
    Level 1 existing ODS delta -> New DSO's
    If full loads are already there in the DSOs, further deltas won't work. Convert the full loads to repair full loads using program RSSM_SET_REPAIR_FULL_FLAG.
    Then continue the deltas.
    Hope it Helps
    Srini

  • Initial full load of Master data using process chain

    Hi All,
    Could you please help me with the initial master data load to characteristics with attributes and texts. I need to load master data to 23 InfoObjects; using a process chain, can I do a full load of master data to all InfoObjects at a time? And one more doubt: as per my knowledge, we can't maintain more than one variant in an InfoPackage - is that right, or can we?
    Means Start Variant -> Info Package (0Customer_Text, 0Customer_Attr,0BILL_TYPE_TEXT, BILL_CAT_TEXT) -> DTP ( ", ", ", ") -> ACR.
    Your Help will be appreciated.
    Thanks & Regards
    Sunil

    Hi,
    "I need to load master data to 23 info objects, by using process chain can I do full load of master data to all info objects at a time."
    if there is no dependency between attributes then you add you can create process chains and trigger them at a time. No issues.
    we can't maintain more than one variant in an info package, is that right ? or we can ?
    With one info pack you can't load data to all 23 psa. because each data source have own psa. you need to sue 23 info packs.
    in general start variant--> info pack --> dtp (assuming as your bw 7.x)---> attribute change run.
    like that you need to create 23 chains 
    Or create one two big chains.
    one is for attribute and another for text.
    In attribute
    start varaint--> info pack(info bject 1)--DTP(infoobject 1))--> info pack(infoo bject 2)-->dtp(infoobject 2).
    Like that way you can create in series and parallel chains to load attributes data into info objects. at end you add change run for 6 info objects each. SAme you can do for text loads also.
    Thanks

  • Project Analytics 7.9.6.1 - Error while running a full load

    Hi All,
    I am performing a full load for Projects Analytics and get the following error,
    =====================================
    ERROR OUTPUT
    =====================================
    1103 SEVERE Wed Nov 18 02:49:36 WST 2009 Could not attach to workflow because of errorCode 36331 For workflow SDE_ORA_CodeDimension_Gl_Account
    1104 SEVERE Wed Nov 18 02:49:36 WST 2009
    ANOMALY INFO::: Error while executing : INFORMATICA TASK:SDE_ORA11510_Adaptor:SDE_ORA_CodeDimension_Gl_Account:(Source : FULL Target : FULL)
    MESSAGE:::
    Irrecoverable Error
    Error while contacting Informatica server for getting workflow status for SDE_ORA_CodeDimension_Gl_Account
    Error Code = 36331:Unknown reason for error code 36331
    Pmcmd output :
    The session log initialises a NULL value for mapping parameter MPLT_ADI_CODES.$$CATEGORY. This is then used in subsequent SQL and results in an ORA-00936: missing expression error. Following are the initialization section and the load section containing the error in the log.
    Initialisation
    DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
    DIRECTOR> VAR_27028 Use override value [ORA_11_5_10] for session parameter:[$DBConnection_OLTP].
    DIRECTOR> VAR_27028 Use override value [ORA_11_5_10.DATAWAREHOUSE.SDE_ORA11510_Adaptor.SDE_ORA_CodeDimension_Gl_Account_Segments.log] for session parameter:[$PMSessionLogFile].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[MPLT_ADI_CODES.$$CATEGORY].
    DIRECTOR> VAR_27028 Use override value [4] for mapping parameter:[MPLT_SA_ORA_CODES.$$DATASOURCE_NUM_ID].
    DIRECTOR> VAR_27028 Use override value [DEFAULT] for mapping parameter:[MPLT_SA_ORA_CODES.$$TENANT_ID].
    DIRECTOR> TM_6014 Initializing session [SDE_ORA_CodeDimension_Gl_Account_Segments] at [Wed Nov 18 02:49:11 2009].
    DIRECTOR> TM_6683 Repository Name: [repo_service]
    DIRECTOR> TM_6684 Server Name: [int_service]
    DIRECTOR> TM_6686 Folder: [SDE_ORA11510_Adaptor]
    DIRECTOR> TM_6685 Workflow: [SDE_ORA_CodeDimension_Gl_Account_Segments] Run Instance Name: [] Run Id: [17]
    DIRECTOR> TM_6101 Mapping name: SDE_ORA_CodeDimension_GL_Account_Segments [version 1].
    DIRECTOR> TM_6963 Pre 85 Timestamp Compatibility is Enabled
    DIRECTOR> TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS]
    DIRECTOR> TM_6827 [C:\Informatica\PowerCenter8.6.1\server\infa_shared\Storage] will be used as storage directory for session [SDE_ORA_CodeDimension_Gl_Account_Segments].
    DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
    DIRECTOR> TM_6703 Session [SDE_ORA_CodeDimension_Gl_Account_Segments] is run by 32-bit Integration Service [node01_ASG596138], version [8.6.1], build [1218].
    MANAGER> PETL_24058 Running Partition Group [1].
    MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
    MANAGER> PETL_24001 Parallel Pipeline Engine running.
    MANAGER> PETL_24003 Initializing session run.
    MAPPING> CMN_1569 Server Mode: [UNICODE]
    MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> TM_6151 The session sort order is [Binary].
    MAPPING> TM_6185 Warning. Code page validation is disabled in this session.
    MAPPING> TM_6156 Using low precision processing.
    MAPPING> TM_6180 Deadlock retry logic will not be implemented.
    MAPPING> TM_6307 DTM error log disabled.
    MAPPING> TE_7022 TShmWriter: Initialized
    MAPPING> DBG_21075 Connecting to database [orcl], user [DAC_REP]
    MAPPING> CMN_1716 Lookup [mplt_ADI_Codes.Lkp_Master_Map] uses database connection [Relational:DataWarehouse] in code page [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> CMN_1716 Lookup [mplt_ADI_Codes.Lkp_Master_Code] uses database connection [Relational:DataWarehouse] in code page [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> CMN_1716 Lookup [mplt_ADI_Codes.Lkp_W_CODE_D] uses database connection [Relational:DataWarehouse] in code page [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_CodeDimension_Gl_Account_Segments]
    DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
    MANAGER> PETL_24004 Starting pre-session tasks. : (Wed Nov 18 02:49:14 2009)
    MANAGER> PETL_24027 Pre-session task completed successfully. : (Wed Nov 18 02:49:14 2009)
    DIRECTOR> PETL_24006 Starting data movement.
    MAPPING> TM_6660 Total Buffer Pool size is 32000000 bytes and Block size is 128000 bytes.
    READER_1_1_1> DBG_21438 Reader: Source is [asgdev], user [APPS]
    READER_1_1_1> BLKR_16051 Source database connection [ORA_11_5_10] code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    READER_1_1_1> BLKR_16003 Initialization completed successfully.
    WRITER_1_*_1> WRT_8147 Writer: Target is database [orcl], user [DAC_REP], bulk mode [OFF]
    WRITER_1_*_1> WRT_8221 Target database connection [DataWarehouse] code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL INSERT statement:
    INSERT INTO W_CODE_D(DATASOURCE_NUM_ID,SOURCE_CODE,SOURCE_CODE_1,SOURCE_CODE_2,SOURCE_CODE_3,SOURCE_NAME_1,SOURCE_NAME_2,CATEGORY,LANGUAGE_CODE,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,MASTER_VALUE,W_INSERT_DT,W_UPDATE_DT,TENANT_ID) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL UPDATE statement:
    UPDATE W_CODE_D SET SOURCE_CODE_1 = ?, SOURCE_CODE_2 = ?, SOURCE_CODE_3 = ?, SOURCE_NAME_1 = ?, SOURCE_NAME_2 = ?, MASTER_DATASOURCE_NUM_ID = ?, MASTER_CODE = ?, MASTER_VALUE = ?, W_INSERT_DT = ?, W_UPDATE_DT = ?, TENANT_ID = ? WHERE DATASOURCE_NUM_ID = ? AND SOURCE_CODE = ? AND CATEGORY = ? AND LANGUAGE_CODE = ?
    WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL DELETE statement:
    DELETE FROM W_CODE_D WHERE DATASOURCE_NUM_ID = ? AND SOURCE_CODE = ? AND CATEGORY = ? AND LANGUAGE_CODE = ?
    WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_CODE_D]
    WRITER_1_*_1> WRT_8003 Writer initialization complete.
    READER_1_1_1> BLKR_16007 Reader run started.
    WRITER_1_*_1> WRT_8005 Writer run started.
    WRITER_1_*_1> WRT_8158
    Load section
    *****START LOAD SESSION*****
    Load Start Time: Wed Nov 18 02:49:16 2009
    Target tables:
    W_CODE_D
    READER_1_1_1> RR_4029 SQ Instance [mplt_BC_ORA_Codes_GL_Account_Segments.Sq_Fnd_Flex_Values] User specified SQL Query [SELECT
    FND_FLEX_VALUES.FLEX_VALUE_SET_ID,
    FND_FLEX_VALUES.FLEX_VALUE,
    MAX(FND_FLEX_VALUES_TL.DESCRIPTION),
    FND_ID_FLEX_SEGMENTS.ID_FLEX_NUM,
    FND_ID_FLEX_SEGMENTS.APPLICATION_COLUMN_NAME
    FROM
    FND_FLEX_VALUES,
    FND_FLEX_VALUES_TL,
    FND_ID_FLEX_SEGMENTS,
    FND_SEGMENT_ATTRIBUTE_VALUES
    WHERE
    FND_FLEX_VALUES.FLEX_VALUE_ID = FND_FLEX_VALUES_TL.FLEX_VALUE_ID AND FND_FLEX_VALUES_TL.LANGUAGE ='US' AND
    FND_ID_FLEX_SEGMENTS.FLEX_VALUE_SET_ID =FND_FLEX_VALUES.FLEX_VALUE_SET_ID AND
    FND_ID_FLEX_SEGMENTS.APPLICATION_ID = 101 AND
    FND_ID_FLEX_SEGMENTS.ID_FLEX_CODE ='GL#' AND
    FND_ID_FLEX_SEGMENTS.ID_FLEX_NUM =FND_SEGMENT_ATTRIBUTE_VALUES.ID_FLEX_NUM AND
    FND_SEGMENT_ATTRIBUTE_VALUES.APPLICATION_ID =101 AND
    FND_SEGMENT_ATTRIBUTE_VALUES.ID_FLEX_CODE = 'GL#' AND
    FND_ID_FLEX_SEGMENTS.APPLICATION_COLUMN_NAME=FND_SEGMENT_ATTRIBUTE_VALUES.APPLICATION_COLUMN_NAME AND
    FND_SEGMENT_ATTRIBUTE_VALUES.ATTRIBUTE_VALUE ='Y'
    GROUP BY
    FND_FLEX_VALUES.FLEX_VALUE_SET_ID,
    FND_FLEX_VALUES.FLEX_VALUE,
    FND_ID_FLEX_SEGMENTS.ID_FLEX_NUM,
    FND_ID_FLEX_SEGMENTS.APPLICATION_COLUMN_NAME]
    READER_1_1_1> RR_4049 SQL Query issued to database : (Wed Nov 18 02:49:17 2009)
    READER_1_1_1> RR_4050 First row returned from database to reader : (Wed Nov 18 02:49:17 2009)
    LKPDP_3> DBG_21312 Lookup Transformation [mplt_ADI_Codes.Lkp_W_CODE_D]: Lookup override sql to create cache: SELECT W_CODE_D.SOURCE_NAME_1 AS SOURCE_NAME_1, W_CODE_D.SOURCE_NAME_2 AS SOURCE_NAME_2, W_CODE_D.MASTER_DATASOURCE_NUM_ID AS MASTER_DATASOURCE_NUM_ID, W_CODE_D.MASTER_CODE AS MASTER_CODE, W_CODE_D.MASTER_VALUE AS MASTER_VALUE, W_CODE_D.W_INSERT_DT AS W_INSERT_DT, W_CODE_D.TENANT_ID AS TENANT_ID, W_CODE_D.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, W_CODE_D.SOURCE_CODE AS SOURCE_CODE, W_CODE_D.CATEGORY AS CATEGORY, W_CODE_D.LANGUAGE_CODE AS LANGUAGE_CODE FROM W_CODE_D
    WHERE
    W_CODE_D.CATEGORY IN () ORDER BY DATASOURCE_NUM_ID,SOURCE_CODE,CATEGORY,LANGUAGE_CODE,SOURCE_NAME_1,SOURCE_NAME_2,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,MASTER_VALUE,W_INSERT_DT,TENANT_ID
    LKPDP_3> TE_7212 Increasing [Index Cache] size for transformation [mplt_ADI_Codes.Lkp_W_CODE_D] from [1000000] to [4734976].
    LKPDP_3> TE_7212 Increasing [Data Cache] size for transformation [mplt_ADI_Codes.Lkp_W_CODE_D] from [2000000] to [2007040].
    READER_1_1_1> BLKR_16019 Read [625] rows, read [0] error rows for source table [FND_ID_FLEX_SEGMENTS] instance name [mplt_BC_ORA_Codes_GL_Account_Segments.FND_ID_FLEX_SEGMENTS]
    READER_1_1_1> BLKR_16008 Reader run completed.
    LKPDP_3> TM_6660 Total Buffer Pool size is 609824 bytes and Block size is 65536 bytes.
    LKPDP_3:READER_1_1> DBG_21438 Reader: Source is [orcl], user [DAC_REP]
    LKPDP_3:READER_1_1> BLKR_16051 Source database connection [DataWarehouse] code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    LKPDP_3:READER_1_1> BLKR_16003 Initialization completed successfully.
    LKPDP_3:READER_1_1> BLKR_16007 Reader run started.
    LKPDP_3:READER_1_1> RR_4049 SQL Query issued to database : (Wed Nov 18 02:49:18 2009)
    LKPDP_3:READER_1_1> CMN_1761 Timestamp Event: [Wed Nov 18 02:49:18 2009]
    LKPDP_3:READER_1_1> RR_4035 SQL Error [
    ORA-00936: missing expression
    Could you please suggest what the issue might be and how it can be fixed?
    Many thanks,
    Kiran
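    For reference, the failing predicate from the lookup override above can be reproduced in isolation; once $$CATEGORY expands to a real value (the literal below is only an example), the statement parses fine:

        -- Parameter expanded to nothing: fails with ORA-00936 missing expression
        SELECT * FROM W_CODE_D WHERE CATEGORY IN ();

        -- Parameter expanded to an actual category value: valid SQL
        SELECT * FROM W_CODE_D WHERE CATEGORY IN ('GL_ACCOUNT');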

    I have continued the related details in the following thread:
    Mapping Parameter $$CATEGORY not included in the parameter file (7.9.6.1)
    Apologies for the inconvenience.
    Thanks,
    Kiran

  • Task fails while running Full load ETL

    Hi All,
    I am running the full load ETL for Oracle R12 (vanilla instance) HR, but 4 tasks are failing: SDE_ORA_JobDimention, SDE_ORA_HRPositionDimention, SDE_ORA_CodeDimension_Pay_level and SDE_ORA_CodeDimensionJob. I changed the parameters for all these tasks as mentioned in the installation guide and rebuilt. Please help me out.
    The log for SDE_ORA_JobDimention is like this:
    DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
    DIRECTOR> VAR_27028 Use override value [ORA_R12] for session parameter:[$DBConnection_OLTP].
    DIRECTOR> VAR_27028 Use override value [9] for mapping parameter:[$$DATASOURCE_NUM_ID].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$JOBCODE_FLXFLD_SEGMENT_COL].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$JOBFAMILYCODE_FLXFLD_SEGMENT_COL].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$LAST_EXTRACT_DATE].
    DIRECTOR> VAR_27028 Use override value [DEFAULT] for mapping parameter:[$$TENANT_ID].
    DIRECTOR> TM_6014 Initializing session [SDE_ORA_JobDimension_Full] at [Fri Sep 26 10:52:05 2008]
    DIRECTOR> TM_6683 Repository Name: [Oracle_BI_DW_Base]
    DIRECTOR> TM_6684 Server Name: [Oracle_BI_DW_Base_Integration_Service]
    DIRECTOR> TM_6686 Folder: [SDE_ORAR12_Adaptor]
    DIRECTOR> TM_6685 Workflow: [SDE_ORA_JobDimension_Full]
    DIRECTOR> TM_6101 Mapping name: SDE_ORA_JobDimension [version 1]
    DIRECTOR> TM_6827 [C:\Informatica\PowerCenter8.1.1\server\infa_shared\Storage] will be used as storage directory for session [SDE_ORA_JobDimension_Full].
    DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
    DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
    DIRECTOR> TM_6703 Session [SDE_ORA_JobDimension_Full] is run by 32-bit Integration Service [node01_HSCHBSCGN20031], version [8.1.1], build [0831].
    MANAGER> PETL_24058 Running Partition Group [1].
    MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
    MANAGER> PETL_24001 Parallel Pipeline Engine running.
    MANAGER> PETL_24003 Initializing session run.
    MAPPING> CMN_1569 Server Mode: [ASCII]
    MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> TM_6151 Session Sort Order: [Binary]
    MAPPING> TM_6156 Using LOW precision decimal arithmetic
    MAPPING> TM_6180 Deadlock retry logic will not be implemented.
    MAPPING> TM_6307 DTM Error Log Disabled.
    MAPPING> TE_7022 TShmWriter: Initialized
    MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_JobDimension_Full]
    DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
    MANAGER> PETL_24004 Starting pre-session tasks. : (Fri Sep 26 10:52:13 2008)
    MANAGER> PETL_24027 Pre-session task completed successfully. : (Fri Sep 26 10:52:14 2008)
    DIRECTOR> PETL_24006 Starting data movement.
    MAPPING> TM_6660 Total Buffer Pool size is 32000000 bytes and Block size is 1280000 bytes.
    READER_1_1_1> DBG_21438 Reader: Source is [dev], user [apps]
    READER_1_1_1> BLKR_16003 Initialization completed successfully.
    WRITER_1_*_1> WRT_8146 Writer: Target is database [orcl], user [obia], bulk mode [ON]
    WRITER_1_*_1> WRT_8106 Warning! Bulk Mode session - recovery is not guaranteed.
    WRITER_1_*_1> WRT_8124 Target Table W_JOB_DS :SQL INSERT statement:
    INSERT INTO W_JOB_DS(JOB_CODE,JOB_NAME,JOB_DESC,JOB_FAMILY_CODE,JOB_FAMILY_NAME,JOB_FAMILY_DESC,JOB_LEVEL,W_FLSA_STAT_CODE,W_FLSA_STAT_DESC,W_EEO_JOB_CAT_CODE,W_EEO_JOB_CAT_DESC,AAP_JOB_CAT_CODE,AAP_JOB_CAT_NAME,ACTIVE_FLG,CREATED_BY_ID,CHANGED_BY_ID,CREATED_ON_DT,CHANGED_ON_DT,AUX1_CHANGED_ON_DT,AUX2_CHANGED_ON_DT,AUX3_CHANGED_ON_DT,AUX4_CHANGED_ON_DT,SRC_EFF_FROM_DT,SRC_EFF_TO_DT,DELETE_FLG,DATASOURCE_NUM_ID,INTEGRATION_ID,TENANT_ID,X_CUSTOM) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_JOB_DS]
    WRITER_1_*_1> WRT_8003 Writer initialization complete.
    WRITER_1_*_1> WRT_8005 Writer run started.
    READER_1_1_1> BLKR_16007 Reader run started.
    READER_1_1_1> RR_4029 SQ Instance [mplt_BC_ORA_JobDimension.Sq_Jobs] User specified SQL Query [SELECT
    PER_JOBS.JOB_ID,
    PER_JOBS.BUSINESS_GROUP_ID,
    PER_JOBS.JOB_DEFINITION_ID,
    PER_JOBS.DATE_FROM,
    PER_JOBS.DATE_TO,
    PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT,      PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
    PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
    PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
    PER_JOBS.NAME,
    PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
    PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS.  AS JOB_CODE,
      '0' AS X_CUSTOM
    FROM
    PER_JOBS, PER_JOB_DEFINITIONS
    WHERE
    PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID]
    WRITER_1_*_1> WRT_8158
    *****START LOAD SESSION*****
    Load Start Time: Fri Sep 26 10:53:05 2008
    Target tables:
    W_JOB_DS
    READER_1_1_1> RR_4049 SQL Query issued to database : (Fri Sep 26 10:53:05 2008)
    READER_1_1_1> CMN_1761 Timestamp Event: [Fri Sep 26 10:53:06 2008]
    READER_1_1_1> RR_4035 SQL Error [
    ORA-01747: invalid user.table.column, table.column, or column specification
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT
    PER_JOBS.JOB_ID,
    PER_JOBS.BUSINESS_GROUP_ID,
    PER_JOBS.JOB_DEFINITION_ID,
    PER_JOBS.DATE_FROM,
    PER_JOBS.DATE_TO,
    PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT, PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
    PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
    PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
    PER_JOBS.NAME,
    PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
    PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS. AS JOB_CODE,
    '0' AS X_CUSTOM
    FROM
    PER_JOBS, PER_JOB_DEFINITIONS
    WHERE
    PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID
    Oracle Fatal Error
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT
    PER_JOBS.JOB_ID,
    PER_JOBS.BUSINESS_GROUP_ID,
    PER_JOBS.JOB_DEFINITION_ID,
    PER_JOBS.DATE_FROM,
    PER_JOBS.DATE_TO,
    PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT, PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
    PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
    PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
    PER_JOBS.NAME,
    PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
    PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS. AS JOB_CODE,
    '0' AS X_CUSTOM
    FROM
    PER_JOBS, PER_JOB_DEFINITIONS
    WHERE
    PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID
    Oracle Fatal Error].
    READER_1_1_1> CMN_1761 Timestamp Event: [Fri Sep 26 10:53:06 2008]
    READER_1_1_1> BLKR_16004 ERROR: Prepare failed.
    WRITER_1_*_1> WRT_8333 Rolling back all the targets due to fatal session error.
    WRITER_1_*_1> WRT_8325 Final rollback executed for the target [W_JOB_DS] at end of load
    WRITER_1_*_1> WRT_8035 Load complete time: Fri Sep 26 10:53:06 2008
    LOAD SUMMARY
    ============
    WRT_8036 Target: W_JOB_DS (Instance Name: [W_JOB_DS])
    WRT_8044 No data loaded for this target
    WRITER_1__1> WRT_8043 ****END LOAD SESSION*****
    MANAGER> PETL_24031
    ***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
    Thread [READER_1_1_1] created for [the read stage] of partition point [mplt_BC_ORA_JobDimension.Sq_Jobs] has completed. The total run time was insufficient for any meaningful statistics.
    Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [mplt_BC_ORA_JobDimension.Sq_Jobs] has completed. The total run time was insufficient for any meaningful statistics.
    Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_JOB_DS] has completed. The total run time was insufficient for any meaningful statistics.
    MANAGER> PETL_24005 Starting post-session tasks. : (Fri Sep 26 10:53:06 2008)
    MANAGER> PETL_24029 Post-session task completed successfully. : (Fri Sep 26 10:53:06 2008)
    MAPPING> TM_6018 Session [SDE_ORA_JobDimension_Full] run completed with [0] row transformation errors.
    MANAGER> PETL_24002 Parallel Pipeline Engine finished.
    DIRECTOR> PETL_24013 Session run completed with failure.
    DIRECTOR> TM_6022
    SESSION LOAD SUMMARY
    ================================================
    DIRECTOR> TM_6252 Source Load Summary.
    DIRECTOR> CMN_1740 Table: [Sq_Jobs] (Instance Name: [mplt_BC_ORA_JobDimension.Sq_Jobs])
         Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
    DIRECTOR> TM_6253 Target Load Summary.
    DIRECTOR> CMN_1740 Table: [W_JOB_DS] (Instance Name: [W_JOB_DS])
         Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
    DIRECTOR> TM_6023
    ===================================================
    DIRECTOR> TM_6020 Session [SDE_ORA_JobDimension_Full] completed at [Fri Sep 26 10:53:07 2008]

    To make use of the warehouse you would probably want to connect to an EBS instance in order to populate it, since the execution plan you intend to run is designed for the EBS data model.
    I guess if you really didn't want to connect to the EBS instance to pull data, you could build one using the Universal adapter. This allows you to load out of flat files if you wish, but I wouldn't recommend making this a habit for an actual implementation, as it creates another potential point of failure (populating the flat files).
    Thanks,
    Austin
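    As a side note on the ORA-01747 itself: the log shows default values [] being used for mplt_BC_ORA_JobDimension.$$JOBCODE_FLXFLD_SEGMENT_COL and $$JOBFAMILYCODE_FLXFLD_SEGMENT_COL. Those parameters are meant to expand to column names, so with them empty the generated SQL contains "PER_JOBS. AS JOB_FAMILY_CODE". A sketch of broken versus expanded (the segment column below is only an example; the right one depends on your flexfield setup):

        -- Parameter empty: invalid column specification, ORA-01747
        SELECT PER_JOBS. AS JOB_FAMILY_CODE FROM PER_JOBS;

        -- Parameter set to a real segment column in DAC: parses fine
        SELECT PER_JOBS.JOB_INFORMATION1 AS JOB_FAMILY_CODE FROM PER_JOBS;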

  • Loading through Process Chains 2 Delta Loads and 1 Full Load (ODS to Cube).

    Dear All,
    I am loading through process chains with 2 delta loads and 1 full load from ODS to cube in 3.5. I am in the development process.
    My loading process is:
    Start - 2 Delta Loads - 1 Full Load - ODS Activation - Delete Index - Further Update - Delete overlapping requests from infocube - Creating Index.
    My question is:
    When I load for the first time I get some data; for the next load I should get zero, as there is no new data, but I am getting the same number of records again. Maybe it is taking data from the full upload, I guess. Please guide me.
    Krishna.

    Hi,
    The reason you are getting the same no. of records is, as you said, the full load: after running the deltas you got all the changed records, but after those two deltas you again have a full load step, which will pick up the whole of the data all over again.
    More specifically, you are getting the same no. of records because:
    1> You are running the chain for the first time.
    2> You ran these delta InfoPackages for the first time; while initializing the deltas you might have chosen "Initialization without data transfer", so when these deltas first ran they picked up the whole of the data. Running a full load after that will pick up the same no. of records too.
    If the two deltas you are talking about run one after another, then I'd say you got the data because of some changes; since you are loading from a single ODS to a cube, both your delta and your full load will pick the same data "for the first time" during data marting, as they have the same data source (the ODS).
    Hopefully this will serve your purpose.
    Thax & Regards
    Vaibhave Sharma
    Edited by: Vaibhave Sharma on Sep 3, 2008 10:28 PM

  • 0orgunit_att full load failed

    The full load failed. When I analyzed it, the error was 'job terminated in source system'.
    I checked the job overview in the source system and there is a dump.
    The dump is:
    Runtime Errors         ITAB_DUPLICATE_KEY
    Date and Time          02.02.2009 08:44:59
    Short text
         A row with the same key already exists.
    What happened?
         Error in the ABAP Application Program
         The current ABAP program "SAPLHRMS_BW_PA_OS" had to be terminated because it
          has
         come across a statement that unfortunately cannot be executed.
    What can you do?
         Note down which actions and inputs caused the error.
         To process the problem further, contact you SAP system
         administrator.
         Using Transaction ST22 for ABAP Dump Analysis, you can look
         at and manage termination messages, and you can also
         keep them for a long time.
    Error analysis
         An entry was to be entered into the table "\FUNCTION=HR_BW_EXTRACT_IO_ORGUNIT\DATA=MAIN_COSTCENTERS[]" (which should have had a unique table key (UNIQUE KEY)).
         However, there already existed a line with an identical key.
         The insert operation could have occurred as a result of an INSERT or MOVE command, or in conjunction with a SELECT ... INTO.
         The statement "INSERT INITIAL LINE ..." cannot be used to insert several initial lines into a table with a unique key.
    To correct the error:
    Probably the only way to eliminate the error is to correct the program.
    If the error occurs in a non-modified SAP program, you may be able to find an interim solution in an SAP Note.
    If you have access to SAP Notes, carry out a search with the following keywords:
    "ITAB_DUPLICATE_KEY" " "
    "SAPLHRMS_BW_PA_OS" or "LHRMS_BW_PA_OSU06"
    "HR_BW_EXTRACT_IO_ORGUNIT"
    If you cannot solve the problem yourself and want to send an error notification to SAP, include the following information:
    please suggest <removed by moderator>.
    pramod
    Edited by: Siegfried Szameitat on Feb 2, 2009 11:06 AM

    Hi asish,
    Runtime Errors         ITAB_DUPLICATE_KEY
    Date and Time          02.02.2009 06:18:44
       180                i0027_flag         = ' '
       181                ombuffer_mode      = ' '
       182            TABLES
       183                in_objects         = in_objects
       184                main_costcenters   = main_co
       185            EXCEPTIONS
       186                OTHERS             = 0.
       187
       188
    >>>>>         INSERT LINES OF main_co INTO TABLE main_costcenters.
       190
       191         last_plvar = orgunits-plvar.
       192         REFRESH in_objects.
       193         MOVE-CORRESPONDING orgunits TO in_objects.
       194         APPEND in_objects.
       195       ENDIF.
       196
       197     ENDLOOP.
       198   ENDIF.
       199
       200   LOOP AT orgunits.
       201
       202     CLEAR:   infty1000, main_co, infty1008, l_t_hrobject.
       203     REFRESH: infty1000, main_co, infty1008, l_t_hrobject.
       204
       205     APPEND orgunits TO l_t_hrobject.
       206
       207     LOOP AT l_t_i1000_all INTO infty1000 WHERE objid = orgunits-objid.
       208       APPEND infty1000.
    I checked this; it reflects a structure. I cannot modify it as this is a standard table.
    pramod

  • InfoSpoke Delta - Full loads

    Hello All,
    I have an InfoSpoke in PROD with delta update, sourcing from an ODS.
    We planned a full load with selection option 06-31-2005 to 06-31-2007,
    changed the update mode from delta to full, started the full load on 06-01-2007, and completed the load successfully on 06-09-2007.
    We restored the delta once the data had been consumed from the /BIC/OHXXXX tables, on 06-22-2007 (the full load was huge).
    During this full load, the ODS kept being fed with deltas (the load to the ODS was not stopped).
    Users are complaining that delta data is missing for the period (06/01 - 06/09) in which the full load was performed.
    Can someone throw some light on why the delta data is missing only for that period? Is not putting the ODS load on hold the reason for this failure, which messed up the delta pointer?
    Full points assured.

    Hi,
    Enabling delta load:
    Check the below link for step-by-step instructions:
    http://help.sap.com/saphelp_nw04s/helpdata/en/44/97433e99ee70dbe10000000a1553f6/frameset.htm
    It describes:
    1. Delta Load Management Framework Overview
    2. Enabling Delta Load
    3. Initializing Delta Load

  • Difference: Full load, Delta Load & INIT load in BW

    Hi Experts,
    I am new to SAP BW data warehousing.
    What is the difference between a full load, a delta load and an init load in BW?
    Regards
    Praveen

    Hi Praveen,
    Hope the below helps you...
    Full update:
    A full update requests all the data that corresponds with the selection criteria you have determined in the scheduler.
    You can indicate that a request with full update mode is a full repair request using the scheduler menu. This request can be posted to every data target, even if the data target already contains data from an initialization run or delta for this DataSource/source system combination, and has overlapping selection conditions.
    If you use the full repair request to reload the data into the DataStore object after you have selectively deleted data from the DataStore object, note that this can lead to inconsistencies in the data target. If the selections for the repair request do not correspond with the selections you made when selectively deleting data from the DataStore object, posting this request can lead to duplicated data records in the data target.
    Initialization of the delta process
    Initializing the delta process is a precondition for requesting the delta.
    More than one initialization selection is possible for different, non-overlapping selection conditions to initialize the delta process when scheduling InfoPackages for data from an SAP system. This gives you the option of loading the data relevant for the delta process to the Business Information Warehouse in steps.
    For example, you could load the data for cost center 1000 to BI in one step and the data for cost center 2000 in another.
    The delta requested after several initializations contains the union of all the successful init selections as the selection criterion. After this, the selection condition for the delta can no longer be changed. In the example, data for cost centers 1000 and 2000 is loaded to BI during a delta.
    Delta update
    A delta update requests only the data that has appeared since the last delta. Before you can request a delta update you first have to initialize the delta process. A delta update is only possible for loading from SAP source systems. If a delta update fails (status red in the monitor), or the overall status of the delta request was set to red manually, the next data request is carried out in repeat mode. With a repeat request, the data that was loaded incorrectly or incompletely in the failed delta request is extracted along with the data that has accrued from this point on. A repeat can only be requested in the dialog screen. If the data from the failed delta request has already been updated to data targets, delete the data from the data targets in question. If you do not delete the data from the data targets, this could lead to duplicated data records after the repeat request.
    Repeat delta update
    If the loading process fails, the delta for the DataSource is requested again.
    Early delta initialization: This is an advanced concept, but just for your information...
    With early delta initialization, you can already write the data to the delta queue or to the delta tables in the application while the initialization is requested in the source system. This enables you to initialize the delta process, in other words the init request, without having to stop posting the data in the source system. You can only carry out early delta initialization if this is supported by the extractor for the DataSource that is called in the source system with this data request. Extractors that support early delta initialization were first supplied with.
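    As a simplified illustration of delta versus repeat delta (real BW tracks this through the delta queue and request status rather than timestamps; the table and bind names below are made up):

        -- Normal delta: everything since the last successful delta request
        SELECT * FROM source_docs
        WHERE change_ts > :last_successful_delta_ts;

        -- Repeat after a failed (red) delta request: re-extract the failed
        -- slice plus whatever has accrued since then
        SELECT * FROM source_docs
        WHERE change_ts > :ts_before_failed_delta;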

  • Full load works, but delta fails - "Error in the Extractor"

    Good morning,
    We are using datasource 3FI_SL_ZZ_SI (Special Ledger line items) to load a cube, and are having trouble with the delta loads.  If I run a full load, everything runs fine.  If I run a delta load, it will initially fail with an error that simply states "Error in the Extractor" (no long text).  If I repeat the delta load, it completes successfully with 0 records returned.  If I then rerun the delta, I get the error again.
    I've run extractions using RSA3, and they work fine - as I would expect, since the full loads work. Unfortunately, I have not been able to find out why the deltas aren't working. After searching the forums, I've tried replicating the DataSource, checked the job log in R/3 (nothing), and run the program RS_TRANSTRU_ACTIVATE_ALL, all to no avail.
    Any ideas?
    Thanks
    We're running BW 3.5, R/3 4.71

    And it's just that easy....
    Yes, it appears this is what the problem was.  I'd been running the delta init without data transfer, and it was failing during the first true delta run.  Once I changed the delta init so that it transferred data, the deltas worked fine.  This was in our development system.  I took a look in our production system where deltas have been running for quite some time, and it turns out the delta initialization there was done with data transfer. 
    Thank you very much!
