Infospoke Full Loads

Hello BW Gurus,
BW 3.5 does not allow full loads after delta has been activated. My problem is that we now have to trigger a full load to the Informatica client, and I can't do this from my delta-configured InfoSpoke.
I even created a duplicate ODS as a copy of the original ODS, then created a separate 'full load' InfoSpoke sourcing this duplicate ODS, but this also does not work.
Does anyone have a workaround for this problem?
Dorothy

Hi,
You can go to change mode, deactivate the delta, and change the extraction mode to full update.
Regards,

Similar Messages

  • InfoSpoke Delta - Full loads

    Hello All,
    I have an InfoSpoke in PROD with delta update, sourcing from an ODS.
    We planned a full load with selection option 06-31-2005 to 06-31-2007.
    We changed the update mode from delta to full, started the full load on
    06-01-2007, and completed the load successfully on 06-09-2007.
    We restored the delta once the data had been consumed from the /BIC/OHXXXX tables on
    06-22-2007 (the full load was huge).
    During this full load, the ODS kept being fed with deltas (the load to the ODS was not stopped).
    Users are complaining that delta data is missing for the period (06/01 - 06/09) in which the full load was performed.
    Can someone throw some light on why the delta data is missing only for that period? Is not putting the ODS load on hold the reason for this failure, i.e., did it mess up the delta pointer?
    Full points assured.

    Hi,
    Enabling delta load:
    Check the link below for step-by-step instructions:
    http://help.sap.com/saphelp_nw04s/helpdata/en/44/97433e99ee70dbe10000000a1553f6/frameset.htm
    It describes:
    1. Delta Load Management Framework Overview
    2. Enabling Delta Load
    3. Initializing Delta Load

  • Infospoke error for full load

    Experts
    When I run a full-load InfoSpoke through a process chain, it fails with SQL error 1652. Can you please tell me why this is happening? It has only been happening for the last two days. Is it because of a large volume of data?
    Any advice is greatly appreciated.
    Many thanks

    This error is due to a temp space problem: Oracle is unable to extend a segment in the TEMP tablespace.
    You need to alter the tablespace TEMP default storage to (pctincrease 1) and, immediately after, alter it back to (pctincrease 0).
    Check with your DBA before doing this.
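    For reference, a sketch of those two statements (note this PCTINCREASE toggle applies to dictionary-managed tablespaces and prompts SMON to coalesce free extents; run as a DBA):
    ALTER TABLESPACE TEMP DEFAULT STORAGE (PCTINCREASE 1);
    ALTER TABLESPACE TEMP DEFAULT STORAGE (PCTINCREASE 0);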
    Regards
    Manga(Assign points if it helps!!)

  • Open Hub FULL LOAD file split.

    Hi,
    Can I split a large file generated by Open Hub Services?
    I have created an InfoSpoke for one of my existing InfoCubes (which has a full load). The data file extracted to the application server is 400 MB in size; is there any procedure we can put in place to split the file into parts?
    I ask because I have a custom ABAP program that picks up the file from the application server and puts it on a user drive, and because of the file size it can't move the file from the application server to the presentation server: the program runs out of internal table memory.
    Any suggestion is appreciated.
    Regards,
    Kironmoy.

    Thanks for the post.
    But in my case the business will run the ABAP program and download the file.
    So I want to make some configuration in OHS so that it dumps the data packet-wise; maybe I can use the selection screen available.
    Any suggestion would be appreciated.

  • URGENT! Please help: DAC Full Load always in 'Running' status at a particular task

    Hi Friends,
    I started a full load yesterday. There are 257 tasks in total. The load went fine without issues up to the 248th task, but the 249th task (Load into Activity Fact) stays in 'Running' status and does not complete even after running for 2 hours. I checked the Informatica Workflow Monitor and found that the workflow is in 'Running' state and never completes. When I right-clicked the session and selected run properties, I could see that 0 rows had been inserted into the target table. So I tried to stop the workflow manually; even after that, the task stayed in 'Stopping' status and never stopped. I then aborted the workflow manually.
    Below is the session log file.Could you please check and let me know.
    Regards,
    Vijay
    Edited by: vijayobi on Jul 22, 2011 4:26 AM

    Hi Friends,
    We executed a full load again on Saturday, 23rd July 2011. This time we allowed the task 'Load into Activity Fact_CUSTOM' to execute without stopping it manually as we did in the previous data load. It executed for 3 hours and 45 minutes and then failed with ORA-01652 (unable to extend temp segment by string in tablespace string). This task executed successfully in our dev environment. Below is what we found in the session log file; please help us resolve this issue and revert as soon as possible, as we have this issue in our prod environment.
    2011-07-23 14:56:07 : ERROR : (8128 | LKPDP_25:READER_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : RR_4035 : SQL Error [
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT distinct LOOKUP_TABLE.ROW_WID AS ROW_WID, LOOKUP_TABLE.GEO_WID AS GEO_WID, LOOKUP_TABLE.INTEGRATION_ID AS INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT AS EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT AS EFFECTIVE_TO_DT FROM W_PARTY_D LOOKUP_TABLE, W_ACTIVITY_FS LEFT OUTER JOIN W_CUSTOMER_ACCOUNT_D ON (W_ACTIVITY_FS.CUSTOMER_ACCOUNT_ID = W_CUSTOMER_ACCOUNT_D.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = W_CUSTOMER_ACCOUNT_D.DATASOURCE_NUM_ID) WHERE COALESCE(W_ACTIVITY_FS.CUSTOMER_ID, W_CUSTOMER_ACCOUNT_D.PARTY_ID) = LOOKUP_TABLE.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = LOOKUP_TABLE.DATASOURCE_NUM_ID AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) >= LOOKUP_TABLE.EFFECTIVE_FROM_DT AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) < LOOKUP_TABLE.EFFECTIVE_TO_DT ORDER BY LOOKUP_TABLE.INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT, LOOKUP_TABLE.ROW_WID, LOOKUP_TABLE.GEO_WID -- ORDER BY INTEGRATION_ID,DATASOURCE_NUM_ID,EFFECTIVE_FROM_DT,EFFECTIVE_TO_DT,ROW_WID,GEO_WID
    Oracle Fatal Error
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT distinct LOOKUP_TABLE.ROW_WID AS ROW_WID, LOOKUP_TABLE.GEO_WID AS GEO_WID, LOOKUP_TABLE.INTEGRATION_ID AS INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT AS EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT AS EFFECTIVE_TO_DT FROM W_PARTY_D LOOKUP_TABLE, W_ACTIVITY_FS LEFT OUTER JOIN W_CUSTOMER_ACCOUNT_D ON (W_ACTIVITY_FS.CUSTOMER_ACCOUNT_ID = W_CUSTOMER_ACCOUNT_D.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = W_CUSTOMER_ACCOUNT_D.DATASOURCE_NUM_ID) WHERE COALESCE(W_ACTIVITY_FS.CUSTOMER_ID, W_CUSTOMER_ACCOUNT_D.PARTY_ID) = LOOKUP_TABLE.INTEGRATION_ID AND W_ACTIVITY_FS.DATASOURCE_NUM_ID = LOOKUP_TABLE.DATASOURCE_NUM_ID AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) >= LOOKUP_TABLE.EFFECTIVE_FROM_DT AND COALESCE(W_ACTIVITY_FS.PLANNED_START_DT, W_ACTIVITY_FS.CREATED_DT) < LOOKUP_TABLE.EFFECTIVE_TO_DT ORDER BY LOOKUP_TABLE.INTEGRATION_ID, LOOKUP_TABLE.DATASOURCE_NUM_ID, LOOKUP_TABLE.EFFECTIVE_FROM_DT, LOOKUP_TABLE.EFFECTIVE_TO_DT, LOOKUP_TABLE.ROW_WID, LOOKUP_TABLE.GEO_WID -- ORDER BY INTEGRATION_ID,DATASOURCE_NUM_ID,EFFECTIVE_FROM_DT,EFFECTIVE_TO_DT,ROW_WID,GEO_WID
    Oracle Fatal Error].
    2011-07-23 14:56:07 : ERROR : (8128 | LKPDP_25:READER_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : BLKR_16004 : ERROR: Prepare failed.
    2011-07-23 14:56:07 : INFO : (8128 | WRITER_1_*_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : WRT_8333 : Rolling back all the targets due to fatal session error.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.LKP_W_PARTY_D_With_Geo_Wid], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXP_Decode_CustomerId], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXP_Decode_CustomerId], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.LKP_W_CUSTOMER_ACCOUNT_D_With_Party_ID], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.LKP_W_CUSTOMER_ACCOUNT_D_With_Party_ID], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXPTRANS], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [mplt_SIL_ActivityFact.EXPTRANS], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [FIL_ETL_PROC_WID], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [FIL_ETL_PROC_WID], and the session is terminating.
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [MPLT_Get_ETL_Proc_WID.Exp_Decide_Etl_Proc_Wid], and the session is terminating.
    2011-07-23 14:56:07 : INFO : (8128 | WRITER_1_*_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : WRT_8325 : Final rollback executed for the target [W_ACTIVITY_F] at end of load
    2011-07-23 14:56:07 : ERROR : (8128 | TRANSF_1_1_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : TM_6085 : A fatal error occurred at transformation [MPLT_Get_ETL_Proc_WID.Exp_Decide_Etl_Proc_Wid], and the session is terminating.
    2011-07-23 14:56:07 : INFO : (8128 | MANAGER) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : PETL_24007 : Received request to stop session run. Attempting to stop worker threads.
    2011-07-23 14:56:07 : INFO : (8128 | WRITER_1_*_1) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_MACAW : WRT_8035 : Load complete time: Sat Jul 23 14:56:07 2011
    Thanks in advance.
    Vinay
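    For what it's worth, ORA-01652 here means the large DISTINCT/ORDER BY lookup query exhausted the TEMP tablespace while the cache was being built. A minimal sketch of the usual checks and remedy, using standard Oracle dictionary views (the tempfile path and size are placeholders):
    -- Temp space currently used vs. free, per tablespace:
    SELECT tablespace_name, SUM(bytes_used) AS bytes_used, SUM(bytes_free) AS bytes_free
    FROM v$temp_space_header
    GROUP BY tablespace_name;
    -- Sessions holding temp segments right now (v$sort_usage on older releases):
    SELECT session_addr, tablespace, segtype, blocks
    FROM v$tempseg_usage;
    -- Remedy: enlarge TEMP, e.g. by adding a tempfile:
    ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/oradata/temp02.dbf' SIZE 4G AUTOEXTEND ON;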

  • Project Analytics 7.9.6.1 - Error while running a full load

    Hi All,
    I am performing a full load for Projects Analytics and get the following error,
    =====================================
    ERROR OUTPUT
    =====================================
    1103 SEVERE Wed Nov 18 02:49:36 WST 2009 Could not attach to workflow because of errorCode 36331 For workflow SDE_ORA_CodeDimension_Gl_Account
    1104 SEVERE Wed Nov 18 02:49:36 WST 2009
    ANOMALY INFO::: Error while executing : INFORMATICA TASK:SDE_ORA11510_Adaptor:SDE_ORA_CodeDimension_Gl_Account:(Source : FULL Target : FULL)
    MESSAGE:::
    Irrecoverable Error
    Error while contacting Informatica server for getting workflow status for SDE_ORA_CodeDimension_Gl_Account
    Error Code = 36331:Unknown reason for error code 36331
    Pmcmd output :
    The session log initialises a NULL value for the mapping parameter MPLT_ADI_CODES.$$CATEGORY. This is then used in subsequent SQL and results in an 'ORA-00936: missing expression' error. Below are the initialisation section and the load section containing the error from the log.
    Initialisation
    DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
    DIRECTOR> VAR_27028 Use override value [ORA_11_5_10] for session parameter:[$DBConnection_OLTP].
    DIRECTOR> VAR_27028 Use override value [ORA_11_5_10.DATAWAREHOUSE.SDE_ORA11510_Adaptor.SDE_ORA_CodeDimension_Gl_Account_Segments.log] for session parameter:[$PMSessionLogFile].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[MPLT_ADI_CODES.$$CATEGORY].
    DIRECTOR> VAR_27028 Use override value [4] for mapping parameter:[MPLT_SA_ORA_CODES.$$DATASOURCE_NUM_ID].
    DIRECTOR> VAR_27028 Use override value [DEFAULT] for mapping parameter:[MPLT_SA_ORA_CODES.$$TENANT_ID].
    DIRECTOR> TM_6014 Initializing session [SDE_ORA_CodeDimension_Gl_Account_Segments] at [Wed Nov 18 02:49:11 2009].
    DIRECTOR> TM_6683 Repository Name: [repo_service]
    DIRECTOR> TM_6684 Server Name: [int_service]
    DIRECTOR> TM_6686 Folder: [SDE_ORA11510_Adaptor]
    DIRECTOR> TM_6685 Workflow: [SDE_ORA_CodeDimension_Gl_Account_Segments] Run Instance Name: [] Run Id: [17]
    DIRECTOR> TM_6101 Mapping name: SDE_ORA_CodeDimension_GL_Account_Segments [version 1].
    DIRECTOR> TM_6963 Pre 85 Timestamp Compatibility is Enabled
    DIRECTOR> TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS]
    DIRECTOR> TM_6827 [C:\Informatica\PowerCenter8.6.1\server\infa_shared\Storage] will be used as storage directory for session [SDE_ORA_CodeDimension_Gl_Account_Segments].
    DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
    DIRECTOR> TM_6703 Session [SDE_ORA_CodeDimension_Gl_Account_Segments] is run by 32-bit Integration Service [node01_ASG596138], version [8.6.1], build [1218].
    MANAGER> PETL_24058 Running Partition Group [1].
    MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
    MANAGER> PETL_24001 Parallel Pipeline Engine running.
    MANAGER> PETL_24003 Initializing session run.
    MAPPING> CMN_1569 Server Mode: [UNICODE]
    MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> TM_6151 The session sort order is [Binary].
    MAPPING> TM_6185 Warning. Code page validation is disabled in this session.
    MAPPING> TM_6156 Using low precision processing.
    MAPPING> TM_6180 Deadlock retry logic will not be implemented.
    MAPPING> TM_6307 DTM error log disabled.
    MAPPING> TE_7022 TShmWriter: Initialized
    MAPPING> DBG_21075 Connecting to database [orcl], user [DAC_REP]
    MAPPING> CMN_1716 Lookup [mplt_ADI_Codes.Lkp_Master_Map] uses database connection [Relational:DataWarehouse] in code page [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> CMN_1716 Lookup [mplt_ADI_Codes.Lkp_Master_Code] uses database connection [Relational:DataWarehouse] in code page [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> CMN_1716 Lookup [mplt_ADI_Codes.Lkp_W_CODE_D] uses database connection [Relational:DataWarehouse] in code page [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_CodeDimension_Gl_Account_Segments]
    DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
    MANAGER> PETL_24004 Starting pre-session tasks. : (Wed Nov 18 02:49:14 2009)
    MANAGER> PETL_24027 Pre-session task completed successfully. : (Wed Nov 18 02:49:14 2009)
    DIRECTOR> PETL_24006 Starting data movement.
    MAPPING> TM_6660 Total Buffer Pool size is 32000000 bytes and Block size is 128000 bytes.
    READER_1_1_1> DBG_21438 Reader: Source is [asgdev], user [APPS]
    READER_1_1_1> BLKR_16051 Source database connection [ORA_11_5_10] code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    READER_1_1_1> BLKR_16003 Initialization completed successfully.
    WRITER_1_*_1> WRT_8147 Writer: Target is database [orcl], user [DAC_REP], bulk mode [OFF]
    WRITER_1_*_1> WRT_8221 Target database connection [DataWarehouse] code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL INSERT statement:
    INSERT INTO W_CODE_D(DATASOURCE_NUM_ID,SOURCE_CODE,SOURCE_CODE_1,SOURCE_CODE_2,SOURCE_CODE_3,SOURCE_NAME_1,SOURCE_NAME_2,CATEGORY,LANGUAGE_CODE,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,MASTER_VALUE,W_INSERT_DT,W_UPDATE_DT,TENANT_ID) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL UPDATE statement:
    UPDATE W_CODE_D SET SOURCE_CODE_1 = ?, SOURCE_CODE_2 = ?, SOURCE_CODE_3 = ?, SOURCE_NAME_1 = ?, SOURCE_NAME_2 = ?, MASTER_DATASOURCE_NUM_ID = ?, MASTER_CODE = ?, MASTER_VALUE = ?, W_INSERT_DT = ?, W_UPDATE_DT = ?, TENANT_ID = ? WHERE DATASOURCE_NUM_ID = ? AND SOURCE_CODE = ? AND CATEGORY = ? AND LANGUAGE_CODE = ?
    WRITER_1_*_1> WRT_8124 Target Table W_CODE_D :SQL DELETE statement:
    DELETE FROM W_CODE_D WHERE DATASOURCE_NUM_ID = ? AND SOURCE_CODE = ? AND CATEGORY = ? AND LANGUAGE_CODE = ?
    WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_CODE_D]
    WRITER_1_*_1> WRT_8003 Writer initialization complete.
    READER_1_1_1> BLKR_16007 Reader run started.
    WRITER_1_*_1> WRT_8005 Writer run started.
    WRITER_1_*_1> WRT_8158
    Load section
    *****START LOAD SESSION*****
    Load Start Time: Wed Nov 18 02:49:16 2009
    Target tables:
    W_CODE_D
    READER_1_1_1> RR_4029 SQ Instance [mplt_BC_ORA_Codes_GL_Account_Segments.Sq_Fnd_Flex_Values] User specified SQL Query [SELECT
    FND_FLEX_VALUES.FLEX_VALUE_SET_ID,
    FND_FLEX_VALUES.FLEX_VALUE,
    MAX(FND_FLEX_VALUES_TL.DESCRIPTION),
    FND_ID_FLEX_SEGMENTS.ID_FLEX_NUM,
    FND_ID_FLEX_SEGMENTS.APPLICATION_COLUMN_NAME
    FROM
    FND_FLEX_VALUES,
    FND_FLEX_VALUES_TL,
    FND_ID_FLEX_SEGMENTS,
    FND_SEGMENT_ATTRIBUTE_VALUES
    WHERE
    FND_FLEX_VALUES.FLEX_VALUE_ID = FND_FLEX_VALUES_TL.FLEX_VALUE_ID AND FND_FLEX_VALUES_TL.LANGUAGE ='US' AND
    FND_ID_FLEX_SEGMENTS.FLEX_VALUE_SET_ID =FND_FLEX_VALUES.FLEX_VALUE_SET_ID AND
    FND_ID_FLEX_SEGMENTS.APPLICATION_ID = 101 AND
    FND_ID_FLEX_SEGMENTS.ID_FLEX_CODE ='GL#' AND
    FND_ID_FLEX_SEGMENTS.ID_FLEX_NUM =FND_SEGMENT_ATTRIBUTE_VALUES.ID_FLEX_NUM AND
    FND_SEGMENT_ATTRIBUTE_VALUES.APPLICATION_ID =101 AND
    FND_SEGMENT_ATTRIBUTE_VALUES.ID_FLEX_CODE = 'GL#' AND
    FND_ID_FLEX_SEGMENTS.APPLICATION_COLUMN_NAME=FND_SEGMENT_ATTRIBUTE_VALUES.APPLICATION_COLUMN_NAME AND
    FND_SEGMENT_ATTRIBUTE_VALUES.ATTRIBUTE_VALUE ='Y'
    GROUP BY
    FND_FLEX_VALUES.FLEX_VALUE_SET_ID,
    FND_FLEX_VALUES.FLEX_VALUE,
    FND_ID_FLEX_SEGMENTS.ID_FLEX_NUM,
    FND_ID_FLEX_SEGMENTS.APPLICATION_COLUMN_NAME]
    READER_1_1_1> RR_4049 SQL Query issued to database : (Wed Nov 18 02:49:17 2009)
    READER_1_1_1> RR_4050 First row returned from database to reader : (Wed Nov 18 02:49:17 2009)
    LKPDP_3> DBG_21312 Lookup Transformation [mplt_ADI_Codes.Lkp_W_CODE_D]: Lookup override sql to create cache: SELECT W_CODE_D.SOURCE_NAME_1 AS SOURCE_NAME_1, W_CODE_D.SOURCE_NAME_2 AS SOURCE_NAME_2, W_CODE_D.MASTER_DATASOURCE_NUM_ID AS MASTER_DATASOURCE_NUM_ID, W_CODE_D.MASTER_CODE AS MASTER_CODE, W_CODE_D.MASTER_VALUE AS MASTER_VALUE, W_CODE_D.W_INSERT_DT AS W_INSERT_DT, W_CODE_D.TENANT_ID AS TENANT_ID, W_CODE_D.DATASOURCE_NUM_ID AS DATASOURCE_NUM_ID, W_CODE_D.SOURCE_CODE AS SOURCE_CODE, W_CODE_D.CATEGORY AS CATEGORY, W_CODE_D.LANGUAGE_CODE AS LANGUAGE_CODE FROM W_CODE_D
    WHERE
    W_CODE_D.CATEGORY IN () ORDER BY DATASOURCE_NUM_ID,SOURCE_CODE,CATEGORY,LANGUAGE_CODE,SOURCE_NAME_1,SOURCE_NAME_2,MASTER_DATASOURCE_NUM_ID,MASTER_CODE,MASTER_VALUE,W_INSERT_DT,TENANT_ID
    LKPDP_3> TE_7212 Increasing [Index Cache] size for transformation [mplt_ADI_Codes.Lkp_W_CODE_D] from [1000000] to [4734976].
    LKPDP_3> TE_7212 Increasing [Data Cache] size for transformation [mplt_ADI_Codes.Lkp_W_CODE_D] from [2000000] to [2007040].
    READER_1_1_1> BLKR_16019 Read [625] rows, read [0] error rows for source table [FND_ID_FLEX_SEGMENTS] instance name [mplt_BC_ORA_Codes_GL_Account_Segments.FND_ID_FLEX_SEGMENTS]
    READER_1_1_1> BLKR_16008 Reader run completed.
    LKPDP_3> TM_6660 Total Buffer Pool size is 609824 bytes and Block size is 65536 bytes.
    LKPDP_3:READER_1_1> DBG_21438 Reader: Source is [orcl], user [DAC_REP]
    LKPDP_3:READER_1_1> BLKR_16051 Source database connection [DataWarehouse] code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    LKPDP_3:READER_1_1> BLKR_16003 Initialization completed successfully.
    LKPDP_3:READER_1_1> BLKR_16007 Reader run started.
    LKPDP_3:READER_1_1> RR_4049 SQL Query issued to database : (Wed Nov 18 02:49:18 2009)
    LKPDP_3:READER_1_1> CMN_1761 Timestamp Event: [Wed Nov 18 02:49:18 2009]
    LKPDP_3:READER_1_1> RR_4035 SQL Error [
    ORA-00936: missing expression
    Could you please suggest what the issue might be and how it can be fixed?
    Many thanks,
    Kiran
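    For reference, a minimal sketch of why the generated lookup SQL fails: with MPLT_ADI_CODES.$$CATEGORY resolved to nothing, the lookup override ends in an empty IN list, which Oracle rejects (the category literal below is a hypothetical example):
    -- Fails as generated, because the NULL parameter leaves an empty IN list:
    SELECT COUNT(*) FROM W_CODE_D WHERE CATEGORY IN ();             -- ORA-00936: missing expression
    -- Works once the parameter file supplies a value:
    SELECT COUNT(*) FROM W_CODE_D WHERE CATEGORY IN ('GL_ACCOUNT'); -- literal is hypothetical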

    I have continued the related details in the following thread:
    Mapping Parameter $$CATEGORY not included in the parameter file (7.9.6.1)
    Apologies for the inconvenience.
    Thanks,
    Kiran

  • What is the difference between full load and delta load in DTP

    Hi,
    I am trying to load data into a cube from another cube using a DTP.
    There are 2 DTPs:
    1: DTP with full load
    2: DTP with delta load
    What is the difference between these two in a DTP?
    Can somebody please help me?

    1: DTP with full load - will update all the requests in the PSA/source to the target.
    2: DTP with delta load - will update only new requests to the data target.
    The system doesn't distinguish new records on the basis of changed records, but rather by the request. That's the reason you have the datamart status to indicate whether the request has been loaded to further data targets.

  • Full load from a DSO to a cube processes less records than available in DSO

    We have a scenario where every Sunday I have to make a full load from a DSO with on-hand stock information to a cube, in which I register a counter at material and store level if stock is available.
    The DTP has no filters at all and has a semantic group on 0MATERIAL and 0PLANT.
    The key in the DSO is:
    0MATERIAL
    0PLANT
    0STOCKTYPE
    0STOR_LOC
    0BOM
    of which only 0MATERIAL, 0PLANT and 0STOR_LOC are later used in the transformation.
    As we had a growing number of records, we decided to delete in the start routine all records where the inventory is not GT zero, thus eliminating zero and negative inventory records.
    Now comes the funny part of the story:
    Prior to these changes I would [in a test system, just copied from PROD] read some 33 million records and write out the same number. Of course, after the change we expected to write out fewer. To my total surprise, I was now reading 45 million records with the same unchanged DTP, and writing out the expected smaller number.
    When checking the number of records in the DSO I found the 45 million, but I cannot explain why the earlier loads retrieved only some 33 million from the same unchanged set of records.
    When checking in PROD - same result: we have some 45 million records in the DSO, but when we do the full load from the DSO to the cube, the DTP processes only some 33 million.
    What am I missing - is there compression going on? Why would the number of records in a DSO differ from the number of records processed in the data packages when I am making a FULL load without any filter restrictions and with only a semantic grouping in place on part of the DSO key?
    Any idea or thought is appreciated.

    Thanks Gaurav.
    I did check whether any further loads were done in between - there were none in the test system. As I mentioned, it was a new copy from PROD to TEST; I compared the number of entries in the DSO and that matches between TEST and PROD (OK, a few more in PROD, but they can be accounted for). In TEST I loaded the day before the changes were imported, to have a comparison, and between that load and the one after the changes were imported, nothing in the DSO changed.
    Both DTPs in TEST and PW2 load from the active DSO data [without archive]. The DTPs had not been changed in quite a while, so I ruled that out. Same with activation of data in the DSO: this DSO gets loaded and activated in PROD daily via a process chain, and we load daily deltas into the cube in question. Only on Sundays, for the start of the new week/fiscal period, do we need to make a full load to capture all materials per site with inventory. The deltas loaded during the week are less than 1 million records, but the difference between the number of records in the DSO and the amount processed in the data packages is more than 10 million per full load, even in PROD.
    I really appreciate the knowledgeable answer; I just wish you had pointed out something that I had missed.

  • CPU only clocks up to 1.2GHz on full load

    I use Boot Camp on my Late 2007 17" 2.4GHz MacBook Pro, with fresh installs of Snow Leopard and Windows 7. I've used Windows 7 to diagnose my problem. I'm using Core Temp to retrieve my CPU performance info and Prime95 to stress-test the hardware.
    Definite Problem:
    My CPU will not operate at any higher than 1.2GHz under full load. It idles at 800MHz at 42 degrees Celsius, and warms up to 62 degrees Celsius under full load. I expect it to operate at 2.4GHz like it once did.
    Best-Guess Diagnosis:
    My computer has no battery (it physically expanded beyond functional size), and I read somewhere that the computer is designed to throttle down when no battery is present in order to prevent the computer from drawing more power than is available to it via the 85 watt charger.
    Best-Guess Solutions:
    1) If my diagnosis is correct, replacing the battery should undo the throttling cap. This is not ideal as it is not a long-term solution (batteries can continue to explode in the future).
    2) I'd like to modify my computer's configuration to release the throttling cap/force full frequency. If possible, this would be the ideal long-term solution. This may not be possible if the charger is indeed bottlenecking my CPU's performance.
    I'm looking for insight into my best-guess solutions or new solutions. Thanks for taking time to check out my inquiry.

    Please carefully read and do both the SMC and PRAM resets; if needed, do each twice. It may also help to restart in Safe Mode to clear some caches.

  • Error in process chain for PCA full load

    Hello everyone,
    I'm trying to use a process chain to delete a previous full load of plan data in a cube prior to the new load (to avoid duplicated records). The successor job in the process chain loads a delta of actual data into the same cube (same InfoSource).
    When executing the process chain (and the included InfoPackage (full load)), the setting "Automatic loading of similar/identical requests from InfoCube" in the InfoPackage is not working (I have ticked "full or init, data-/infosource are the same")...
    I have checked that the function itself works, as I have executed the InfoPackage manually with success. So the problem is somehow the chain.
    In the chain I just execute the InfoPackage as usual... so to my understanding, it should work the same way as if I executed it manually. Or am I wrong? Is some additional setting required in the chain to make it work?
    Any ideas?
    Thanks,
    Fredrik

    Hi Fredrik,
    Not all settings in InfoPackages work in chains the same way they do when running the package manually. Mostly you can check that by pressing F1 on the setting. In your case, you need to add a process type for deleting the data to the chain. In your chain maintenance, look at the process types, then under load processes; there you will find the type you need.
    kind regards
    Siggi

  • Errors: ORA-00054 & ORA-01452 while running DAC Full Load

    Hi Friends,
    Previously I ran a full load; it went well, and I also built some sample reports in BI Apps 7.9.6.2.
    Now I have modified a few parameters as per the business and am trying to run the full load again, but I am stuck with a few similar errors. I have already cleared a couple of DB errors.
    Please help me out with the errors below.
    1. ANOMALY INFO::: Error while executing : TRUNCATE TABLE:W_SALES_BOOKING_LINE_F
    MESSAGE:::com.siebel.etl.database.IllegalSQLQueryException: DataWarehouse:TRUNCATE TABLE W_SALES_BOOKING_LINE_F
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    -- I checked W_SALES_BOOKING_LINE_F; it contains data.
    2. ANOMALY INFO::: Error while executing : CREATE INDEX:W_GL_REVN_F:W_GL_REVN_F_U1
    MESSAGE:::java.lang.Exception: Error while execution : CREATE UNIQUE INDEX
         W_GL_REVN_F_U1
    ON
         W_GL_REVN_F
         (INTEGRATION_ID ASC
         ,DATASOURCE_NUM_ID ASC)
    NOLOGGING
    with error DataWarehouse:CREATE UNIQUE INDEX
         W_GL_REVN_F_U1
    ON
         W_GL_REVN_F
         (INTEGRATION_ID ASC
         ,DATASOURCE_NUM_ID ASC)
    NOLOGGING
    ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
    -- Yes, I found duplicate values in the table W_GL_REVN_F, but how can I rectify this? I tried a few things, but failed.
    Please tell me the steps to achieve this.
    Thanks in advance..
    Stone

    Hi, please see the answers below.
    1. ANOMALY INFO::: Error while executing : TRUNCATE TABLE:W_SALES_BOOKING_LINE_F
    MESSAGE:::com.siebel.etl.database.IllegalSQLQueryException: DataWarehouse:TRUNCATE TABLE W_SALES_BOOKING_LINE_F
    ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
    -- I checked W_SALES_BOOKING_LINE_F; it contains data.
    Just restart the load. It seems your DB processes are busy and the table still has a lock on it, which means something has not yet been committed or rolled back.
    If this issue repeats, you can mail your DBA and ask them to look into it.
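    If you need to see which session is holding the lock, a quick sketch using standard Oracle dictionary views (run as a privileged user):
    SELECT o.object_name, s.sid, s.serial#, s.status
    FROM v$locked_object l, dba_objects o, v$session s
    WHERE l.object_id = o.object_id
      AND l.session_id = s.sid
      AND o.object_name = 'W_SALES_BOOKING_LINE_F';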
    2. ANOMALY INFO::: Error while executing : CREATE INDEX:W_GL_REVN_F:W_GL_REVN_F_U1
    MESSAGE:::java.lang.Exception: Error while execution : CREATE UNIQUE INDEX
    W_GL_REVN_F_U1
    ON
    W_GL_REVN_F
    (INTEGRATION_ID ASC
         ,DATASOURCE_NUM_ID ASC)
         NOLOGGING
         with error DataWarehouse:CREATE UNIQUE INDEX
         W_GL_REVN_F_U1
         ON
         W_GL_REVN_F
         (INTEGRATION_ID ASC
         ,DATASOURCE_NUM_ID ASC)
         NOLOGGING
         ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
         -- Yes, I found duplicate values in the table W_GL_REVN_F, but how can I rectify this?
    Please execute the SQL below to get the duplicate values. If the count is small, you can delete the records based on ROW_WID.
    How many duplicates do you have in total?
    1. SELECT INTEGRATION_ID, DATASOURCE_NUM_ID, COUNT(*)
       FROM W_GL_REVN_F
       GROUP BY INTEGRATION_ID, DATASOURCE_NUM_ID
       HAVING COUNT(*) > 1;
    2. SELECT ROW_WID, DATASOURCE_NUM_ID, INTEGRATION_ID
       FROM W_GL_REVN_F
       WHERE INTEGRATION_ID = (value from the 1st query);
    3. DELETE FROM W_GL_REVN_F WHERE ROW_WID = (value from the 2nd query);
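    Alternatively, a single-statement sketch that keeps one row per key and deletes the rest (assuming ROW_WID uniquely identifies each row):
    DELETE FROM W_GL_REVN_F
    WHERE ROW_WID NOT IN (SELECT MIN(ROW_WID)
                          FROM W_GL_REVN_F
                          GROUP BY INTEGRATION_ID, DATASOURCE_NUM_ID);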
    Hope this helps !!

  • Error in Source System while running the Full Load

    HI Experts,
    We are using the 0CRM_UT_SRV_CONT_I DataSource to extract data from CRM.
    It gives the error 'Error occurred in source system' with message number RSM340 while running the full load.
    I want to mention certain points here:
    1) In RSA3 it runs fine.
    2) The connection between the BW and CRM systems is OK.
    3) We have replicated the DataSource and sent the active version from the Dev system.
    4) When I run an init without data transfer, the init is set successfully, with an entry in RSA7 in CRM.
    5) The delta runs fine with green status.
    6) As the source system is a static client, there is no data for the delta.
    The main issue happens when we run either the full load or the init with data transfer.
    Kindly suggest.
    Thanks
    Mayank

    Hi Mayank,
    check the error logs in the source system (CRM) with transactions SM21 and ST22, and then let us know.
    Charly

  • Full Load: Error while executing :TRUNCATE TABLE: S_ETL_PARAM

    Hi All,
    We are using BI Apps 7.9.6.1. The full load was running fine, but now we are facing a problem truncating the table S_ETL_PARAM.
    I have restarted the Informatica server and also the DAC server, but I still get the same error in the DAC log:
    "ANOMALY INFO::: Error while executing : TRUNCATE TABLE:S_ETL_PARAM
    MESSAGE:::com.siebel.etl.database.IllegalSQLQueryException: DBConnection_OLTP:SIEBTRUN ('siebel.S_ETL_PARAM')
    Values :
    Null Map
    EXCEPTION CLASS::: java.lang.Exception"
    Any suggestions?
    Thanks in Advance,
    Deepak

    Are you trying to run an incremental load when you get this truncate error? Can you re-run the full load and check whether that still runs OK? Please also check your DW-side database logs (for example the alert log) for any DB-level issue; such errors do not produce friendly messages on the DAC/Informatica side.

  • Reg. Error while Full load - process chain

    Hi,
    I am running a process chain which performs a full load. In the Manage screen it is in RED status, whereas in the Monitor screen it shows "Missing messages". When I checked the error and warning messages, I found that the data packet should be run manually, so I tried the manual update option after killing the job manually. After doing this, I tried to run the same process with the "Back to Technical Status" option. While performing this, I again got an error saying "REQUXXXX terminated, as data packet 0000XX could not be locked".
    How can this error be rectified?
    Please advice.
    Thanks in advance!!!
    Regards,
    Melissa

    Hi Melissa,
    "While I checked the Monitor screen it is showing 'Missing messages'": where did you get that message? In the data packet, right?
    "I found that the data packet should be run manually. So I tried the manual update option by killing the job manually": which job did you kill? The extraction job? If you killed the extraction job, then a manual update is not possible; a manual update is only possible once the extraction job has completed successfully.
    That is why you are getting the error message "REQUXXXX terminated, as data packet 0000XX could not be locked": when the request itself no longer exists, you cannot do a manual update for that deleted request.
    The only option then is to set the QM status to red, delete the request from the target, and repeat the load.
    Hope this helps.
    Regards,
    Debjani

  • Task fails while running Full load ETL

    Hi All,
    I am running the full load ETL for Oracle R12 (vanilla instance) HR, but 4 tasks are failing: SDE_ORA_JobDimension, SDE_ORA_HRPositionDimension, SDE_ORA_CodeDimension_Pay_level and SDE_ORA_CodeDimensionJob. I changed the parameters for all these tasks as mentioned in the installation guide and rebuilt. Please help me out.
    The log for SDE_ORA_JobDimension looks like this:
    DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
    DIRECTOR> VAR_27028 Use override value [ORA_R12] for session parameter:[$DBConnection_OLTP].
    DIRECTOR> VAR_27028 Use override value [9] for mapping parameter:[$$DATASOURCE_NUM_ID].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$JOBCODE_FLXFLD_SEGMENT_COL].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$JOBFAMILYCODE_FLXFLD_SEGMENT_COL].
    DIRECTOR> VAR_27027 Use default value [] for mapping parameter:[mplt_BC_ORA_JobDimension.$$LAST_EXTRACT_DATE].
    DIRECTOR> VAR_27028 Use override value [DEFAULT] for mapping parameter:[$$TENANT_ID].
    DIRECTOR> TM_6014 Initializing session [SDE_ORA_JobDimension_Full] at [Fri Sep 26 10:52:05 2008]
    DIRECTOR> TM_6683 Repository Name: [Oracle_BI_DW_Base]
    DIRECTOR> TM_6684 Server Name: [Oracle_BI_DW_Base_Integration_Service]
    DIRECTOR> TM_6686 Folder: [SDE_ORAR12_Adaptor]
    DIRECTOR> TM_6685 Workflow: [SDE_ORA_JobDimension_Full]
    DIRECTOR> TM_6101 Mapping name: SDE_ORA_JobDimension [version 1]
    DIRECTOR> TM_6827 [C:\Informatica\PowerCenter8.1.1\server\infa_shared\Storage] will be used as storage directory for session [SDE_ORA_JobDimension_Full].
    DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
    DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
    DIRECTOR> TM_6703 Session [SDE_ORA_JobDimension_Full] is run by 32-bit Integration Service [node01_HSCHBSCGN20031], version [8.1.1], build [0831].
    MANAGER> PETL_24058 Running Partition Group [1].
    MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
    MANAGER> PETL_24001 Parallel Pipeline Engine running.
    MANAGER> PETL_24003 Initializing session run.
    MAPPING> CMN_1569 Server Mode: [ASCII]
    MAPPING> CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    MAPPING> TM_6151 Session Sort Order: [Binary]
    MAPPING> TM_6156 Using LOW precision decimal arithmetic
    MAPPING> TM_6180 Deadlock retry logic will not be implemented.
    MAPPING> TM_6307 DTM Error Log Disabled.
    MAPPING> TE_7022 TShmWriter: Initialized
    MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_JobDimension_Full]
    DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
    MANAGER> PETL_24004 Starting pre-session tasks. : (Fri Sep 26 10:52:13 2008)
    MANAGER> PETL_24027 Pre-session task completed successfully. : (Fri Sep 26 10:52:14 2008)
    DIRECTOR> PETL_24006 Starting data movement.
    MAPPING> TM_6660 Total Buffer Pool size is 32000000 bytes and Block size is 1280000 bytes.
    READER_1_1_1> DBG_21438 Reader: Source is [dev], user [apps]
    READER_1_1_1> BLKR_16003 Initialization completed successfully.
    WRITER_1_*_1> WRT_8146 Writer: Target is database [orcl], user [obia], bulk mode [ON]
    WRITER_1_*_1> WRT_8106 Warning! Bulk Mode session - recovery is not guaranteed.
    WRITER_1_*_1> WRT_8124 Target Table W_JOB_DS :SQL INSERT statement:
    INSERT INTO W_JOB_DS(JOB_CODE,JOB_NAME,JOB_DESC,JOB_FAMILY_CODE,JOB_FAMILY_NAME,JOB_FAMILY_DESC,JOB_LEVEL,W_FLSA_STAT_CODE,W_FLSA_STAT_DESC,W_EEO_JOB_CAT_CODE,W_EEO_JOB_CAT_DESC,AAP_JOB_CAT_CODE,AAP_JOB_CAT_NAME,ACTIVE_FLG,CREATED_BY_ID,CHANGED_BY_ID,CREATED_ON_DT,CHANGED_ON_DT,AUX1_CHANGED_ON_DT,AUX2_CHANGED_ON_DT,AUX3_CHANGED_ON_DT,AUX4_CHANGED_ON_DT,SRC_EFF_FROM_DT,SRC_EFF_TO_DT,DELETE_FLG,DATASOURCE_NUM_ID,INTEGRATION_ID,TENANT_ID,X_CUSTOM) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_JOB_DS]
    WRITER_1_*_1> WRT_8003 Writer initialization complete.
    WRITER_1_*_1> WRT_8005 Writer run started.
    READER_1_1_1> BLKR_16007 Reader run started.
    READER_1_1_1> RR_4029 SQ Instance [mplt_BC_ORA_JobDimension.Sq_Jobs] User specified SQL Query [SELECT
    PER_JOBS.JOB_ID,
    PER_JOBS.BUSINESS_GROUP_ID,
    PER_JOBS.JOB_DEFINITION_ID,
    PER_JOBS.DATE_FROM,
    PER_JOBS.DATE_TO,
    PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT,      PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
    PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
    PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
    PER_JOBS.NAME,
    PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
    PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS.  AS JOB_CODE,
      '0' AS X_CUSTOM
    FROM
    PER_JOBS, PER_JOB_DEFINITIONS
    WHERE
    PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID]
    WRITER_1_*_1> WRT_8158
    *****START LOAD SESSION*****
    Load Start Time: Fri Sep 26 10:53:05 2008
    Target tables:
    W_JOB_DS
    READER_1_1_1> RR_4049 SQL Query issued to database : (Fri Sep 26 10:53:05 2008)
    READER_1_1_1> CMN_1761 Timestamp Event: [Fri Sep 26 10:53:06 2008]
    READER_1_1_1> RR_4035 SQL Error [
    ORA-01747: invalid user.table.column, table.column, or column specification
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT
    PER_JOBS.JOB_ID,
    PER_JOBS.BUSINESS_GROUP_ID,
    PER_JOBS.JOB_DEFINITION_ID,
    PER_JOBS.DATE_FROM,
    PER_JOBS.DATE_TO,
    PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT, PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
    PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
    PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
    PER_JOBS.NAME,
    PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
    PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS. AS JOB_CODE,
    '0' AS X_CUSTOM
    FROM
    PER_JOBS, PER_JOB_DEFINITIONS
    WHERE
    PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID
    Oracle Fatal Error
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT
    PER_JOBS.JOB_ID,
    PER_JOBS.BUSINESS_GROUP_ID,
    PER_JOBS.JOB_DEFINITION_ID,
    PER_JOBS.DATE_FROM,
    PER_JOBS.DATE_TO,
    PER_JOBS.LAST_UPDATE_DATE AS CHANGED_ON_DT, PER_JOBS.LAST_UPDATED_BY, PER_JOBS.CREATED_BY, PER_JOBS.CREATION_DATE,
    PER_JOB_DEFINITIONS.LAST_UPDATE_DATE AS AUX1_CHANGED_ON_DT,
    PER_JOB_DEFINITIONS.JOB_DEFINITION_ID,
    PER_JOBS.NAME,
    PER_JOBS.JOB_INFORMATION1, PER_JOBS.JOB_INFORMATION3,
    PER_JOBS. AS JOB_FAMILY_CODE, PER_JOB_DEFINITIONS. AS JOB_CODE,
    '0' AS X_CUSTOM
    FROM
    PER_JOBS, PER_JOB_DEFINITIONS
    WHERE
    PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID
    Oracle Fatal Error].
    READER_1_1_1> CMN_1761 Timestamp Event: [Fri Sep 26 10:53:06 2008]
    READER_1_1_1> BLKR_16004 ERROR: Prepare failed.
    WRITER_1_*_1> WRT_8333 Rolling back all the targets due to fatal session error.
    WRITER_1_*_1> WRT_8325 Final rollback executed for the target [W_JOB_DS] at end of load
    WRITER_1_*_1> WRT_8035 Load complete time: Fri Sep 26 10:53:06 2008
    LOAD SUMMARY
    ============
    WRT_8036 Target: W_JOB_DS (Instance Name: [W_JOB_DS])
    WRT_8044 No data loaded for this target
    WRITER_1_*_1> WRT_8043 *****END LOAD SESSION*****
    MANAGER> PETL_24031
    ***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
    Thread [READER_1_1_1] created for [the read stage] of partition point [mplt_BC_ORA_JobDimension.Sq_Jobs] has completed. The total run time was insufficient for any meaningful statistics.
    Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [mplt_BC_ORA_JobDimension.Sq_Jobs] has completed. The total run time was insufficient for any meaningful statistics.
    Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_JOB_DS] has completed. The total run time was insufficient for any meaningful statistics.
    MANAGER> PETL_24005 Starting post-session tasks. : (Fri Sep 26 10:53:06 2008)
    MANAGER> PETL_24029 Post-session task completed successfully. : (Fri Sep 26 10:53:06 2008)
    MAPPING> TM_6018 Session [SDE_ORA_JobDimension_Full] run completed with [0] row transformation errors.
    MANAGER> PETL_24002 Parallel Pipeline Engine finished.
    DIRECTOR> PETL_24013 Session run completed with failure.
    DIRECTOR> TM_6022
    SESSION LOAD SUMMARY
    ================================================
    DIRECTOR> TM_6252 Source Load Summary.
    DIRECTOR> CMN_1740 Table: [Sq_Jobs] (Instance Name: [mplt_BC_ORA_JobDimension.Sq_Jobs])
         Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
    DIRECTOR> TM_6253 Target Load Summary.
    DIRECTOR> CMN_1740 Table: [W_JOB_DS] (Instance Name: [W_JOB_DS])
         Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
    DIRECTOR> TM_6023
    ===================================================
    DIRECTOR> TM_6020 Session [SDE_ORA_JobDimension_Full] completed at [Fri Sep 26 10:53:07 2008]
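    Note the initialization lines at the top of the log: mplt_BC_ORA_JobDimension.$$JOBCODE_FLXFLD_SEGMENT_COL and $$JOBFAMILYCODE_FLXFLD_SEGMENT_COL both defaulted to [], which is why the generated SQL contains the invalid fragment "PER_JOBS. AS JOB_FAMILY_CODE" and fails with ORA-01747. With those parameters populated, the fragment would resolve to something like the sketch below (the SEGMENT1 column names are hypothetical examples; use the actual segment columns of your Job flexfield):
    SELECT PER_JOBS.SEGMENT1 AS JOB_FAMILY_CODE,      -- from $$JOBFAMILYCODE_FLXFLD_SEGMENT_COL (hypothetical)
           PER_JOB_DEFINITIONS.SEGMENT1 AS JOB_CODE   -- from $$JOBCODE_FLXFLD_SEGMENT_COL (hypothetical)
    FROM PER_JOBS, PER_JOB_DEFINITIONS
    WHERE PER_JOBS.JOB_DEFINITION_ID = PER_JOB_DEFINITIONS.JOB_DEFINITION_ID;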

    To make use of the warehouse you would probably want to connect to an EBS instance in order to populate it, since the execution plan you intend to run is designed for the EBS data model.
    I guess if you really didn't want to connect to the EBS instance to pull data, you could build an execution plan using the Universal adapter. This allows you to load from flat files if you wish, but I wouldn't recommend making this a habit for an actual implementation, as it creates another potential point of failure (populating the flat files).
    Thanks,
    Austin
