Performance of ETL loads on Exadata

Oracle prominently advertises its query performance improvements (10-100x), but does anyone know whether the performance of data loads into the DW (ETL) will also improve?

Brad_Peek wrote:
In our case there are many Informatica sessions where the majority of time is spent inside the database. Fortunately, Informatica sessions produce a summary at the bottom of each session log that breaks down where the time was spent.
We are interested to find out how much improvement Exadata will provide for the following types of Informatica workloads:
1) Batch inserts into a very large target table.
-- We have found that inserts into large tables (e.g. 700 million rows plus) with high-cardinality indexes can be quite slow.
-- Slowest when the index is either non-partitioned or globally partitioned.
-- Hoping that flash cache will improve the random IO associated with index maintenance.
-- In this case, Informatica just happens to be the program issuing the inserts. We have the same issue with batch inserts from any program.
-- Note that Informatica can do direct-mode inserts, but even for normal inserts it does "array inserts". Just a bit of trivia. (Both insert paths are sketched below.)
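For illustration, here is a hedged sketch of the two insert paths, using a hypothetical target table big_target and staging table stage_tab (neither is from the original post):

-- Conventional insert: rows pass through the buffer cache and every
-- index on big_target is maintained row by row as data arrives, which
-- is where the random single-block IO on large indexes comes from.
INSERT INTO big_target (pk_id, val)
SELECT pk_id, val FROM stage_tab;

-- Direct-path insert: writes formatted blocks above the high-water
-- mark and defers index maintenance to the end of the statement,
-- turning much of that random IO into bulk work.
INSERT /*+ APPEND */ INTO big_target (pk_id, val)
SELECT pk_id, val FROM stage_tab;
COMMIT;  -- direct-path loaded rows are not visible until commit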
2) Batch updates to a large table by primary key where the updated key values are widely dispersed over the target table.
-- Again, this leads to a large amount of small-block physical IO.
-- We see a large improvement in elapsed time when we can order the updates to match the physical order of the rows in the table, but that isn't always possible. (A sketch of this ordering idea follows below.)
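A hedged sketch of that ordering idea, assuming a hypothetical target big_target keyed on pk_id and a staging table update_stage of incoming changes; fetching the batch in the target's ROWID order makes the block visits largely sequential instead of random:

DECLARE
  TYPE t_rid IS TABLE OF ROWID;
  TYPE t_val IS TABLE OF big_target.val%TYPE;
  l_rids t_rid;
  l_vals t_val;
BEGIN
  -- Read the update batch ordered by the target's physical row address.
  SELECT t.ROWID, s.new_val
    BULK COLLECT INTO l_rids, l_vals
    FROM update_stage s
    JOIN big_target  t ON t.pk_id = s.pk_id
   ORDER BY t.ROWID;
  -- Apply the changes in that order as a single array operation.
  FORALL i IN 1 .. l_rids.COUNT
    UPDATE big_target SET val = l_vals(i) WHERE ROWID = l_rids(i);
  COMMIT;
END;
/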

Similar Messages

  • Performing an HRMS Load

    Hi friends,
    I'm new to the Informatica/OBIA/DAC world and am still learning it. I'm on the verge of performing an ETL load for HR Analytics.
    I have an Oracle Source R12 instance on an Oracle 11g database, and my Oracle target database is 10.2.0.1.0. I need to perform an HRMS load from source to target using Informatica. So, as a first step, how do I connect and import the source R12 instance's HRMS data into my DAC to perform the ETL and load my target database?
    Hope you understand.
    Thanks in Advance.
    Regards,
    Saro

    Dear Svee,
    Thanks for the reply again. Yes, as you said, I checked the custom properties of my Integration Service.
    It is like below
    Name: value
    SiebelUnicodeDB: apps@test biapps@obia
    overrideMpltVarWithMapVar: yes
    ServerPort: 4006
    SiebleUnicodeDBFlag: No
    As it is already set to 'Yes'.
    For one of my failed workflows, "SDE_ORA_Flx_EBSValidationTableDataTmpLoad", I right-clicked it in the Workflow Monitor and selected Get Workflow Log, and got the following details:
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36435 : Starting execution of workflow [SDE_ORA_Flx_EBSSegDataTmpLoad] in folder [SDE_ORA11510_Adaptor] last saved by user [Administrator].
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_44206 : Workflow SDE_ORA_Flx_EBSSegDataTmpLoad started with run id [463], run instance name [], run type [Concurrent Run Disabled].
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_44195 : Workflow [SDE_ORA_Flx_EBSSegDataTmpLoad] service level [SLPriority:5,SLDispatchWaitTime:1800].
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_44253 : Workflow started. Clients will be notified
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36330 : Start task instance [Start]: Execution started.
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36318 : Start task instance [Start]: Execution succeeded.
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36505 : Link [Start --> SDE_ORA_Flx_EBSSegDataTmpLoad]: empty expression string, evaluated to TRUE.
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36388 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] is waiting to be started.
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36682 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad]: started a process with pid [4732] on node [node01_BIAPPS].
    2012-07-23 10:19:01 : INFO : (1164 | 2556) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36330 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad]: Execution started.
    2012-07-23 10:19:02 : ERROR : (1164 | 1380) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : VAR_27086 : Cannot find specified parameter file [D:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\SDE_ORA11510_Adaptor.SDE_ORA_Flx_EBSSegDataTmpLoad.txt] for [session [SDE_ORA_Flx_EBSSegDataTmpLoad.SDE_ORA_Flx_EBSSegDataTmpLoad]].
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6793 Fetching initialization properties from the Integration Service. : (Mon Jul 23 10:19:01 2012)]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [DISP_20305 The [Preparer] DTM with process id [4732] is running on node [node01_BIAPPS].
    : (Mon Jul 23 10:19:01 2012)]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [PETL_24036 Beginning the prepare phase for the session.]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6721 Started [Connect to Repository].]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6722 Finished [Connect to Repository].  It took [0.21875] seconds.]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6794 Connected to repository [Oracle_BI_DW_Base] in domain [Domain_BIAPPS] as user [Administrator] in security domain [Native].]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6721 Started [Fetch Session from Repository].]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6722 Finished [Fetch Session from Repository].  It took [0.140625] seconds.]
    2012-07-23 10:19:02 : INFO : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [TM_6793 Fetching initialization properties from the Integration Service. : (Mon Jul 23 10:19:02 2012)]
    2012-07-23 10:19:02 : ERROR : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [CMN_1761 Timestamp Event: [Mon Jul 23 10:19:02 2012]]
    2012-07-23 10:19:02 : ERROR : (1164 | 1552) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36488 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] : [PETL_24049 Failed to get the initialization properties from the master service process for the prepare phase [Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad]: Unable to read variable definition from parameter file [D:\Informatica\PowerCenter8.6.1\server\infa_shared\SrcFiles\SDE_ORA11510_Adaptor.SDE_ORA_Flx_EBSSegDataTmpLoad.txt].] with error code [32694552].]
    2012-07-23 10:19:04 : ERROR : (1164 | 364) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36320 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad]: Execution failed.
    2012-07-23 10:19:04 : WARNING : (1164 | 364) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36331 : Session task instance [SDE_ORA_Flx_EBSSegDataTmpLoad] failed and its "fail parent if this task fails" setting is turned on.  So, Workflow [SDE_ORA_Flx_EBSSegDataTmpLoad] will be failed.
    2012-07-23 10:19:04 : ERROR : (1164 | 364) : (IS | Oracle_BI_DW_Base_Integration_Service) : node01_BIAPPS : LM_36320 : Workflow [SDE_ORA_Flx_EBSSegDataTmpLoad]: Execution failed.
    Is this log pointing to the correct reason for the task failure? If so, based on the above log, what is the reason for that workflow's failure?
    Kindly help me with this, Svee.
    Thanks for your help.
    Regards,
    Saro
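    For what it's worth, the two ERROR lines (VAR_27086 and PETL_24049) point at the same cause: the parameter file named in the log does not exist under SrcFiles, so DAC either did not generate it or wrote it to a directory other than the one the Integration Service reads. A PowerCenter parameter file is just a text file of section headers and name=value pairs; a minimal sketch, with the folder/workflow/session names taken from the log and the parameter names and values purely hypothetical:

    [SDE_ORA11510_Adaptor.WF:SDE_ORA_Flx_EBSSegDataTmpLoad.ST:SDE_ORA_Flx_EBSSegDataTmpLoad]
    $$DATASOURCE_NUM_ID=4
    $DBConnection_OLTP=ORA_11_5_10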

  • Error While running the ETL Load in DAC (BI Financial Analytics)

    Hi All,
    I have installed and configured BI Applications 7.9.5 and Informatica 8.1.1. The first time we ran the ETL load in DAC, it failed. Every Test Connection was a success for us, and we are getting the error message below.
    The log file which I pasted below is from the path
    /u01/app/oracle/product/Informatica/PowerCenter8.1.1/server/infa_shared
    /SessLogs
    SDE_ORAR12_Adaptor.SDE_ORA_GL_AP_LinkageInformation_Extract_Full.log
    DIRECTOR> VAR_27028 Use override value [DataWarehouse] for session parameter:[$DBConnection_OLAP].
    DIRECTOR> VAR_27028 Use override value [ORA_R12] for session parameter:[$DBConnection_OLTP].
    DIRECTOR> VAR_27028 Use override value [9] for mapping parameter:[$$DATASOURCE_NUM_ID].
    DIRECTOR> VAR_27028 Use override value ['Y'] for mapping parameter:[$$FILTER_BY_LEDGER_ID].
    DIRECTOR> VAR_27028 Use override value ['N'] for mapping parameter:[$$FILTER_BY_LEDGER_TYPE].
    DIRECTOR> VAR_27028 Use override value [04/02/2007] for mapping parameter:[$$INITIAL_EXTRACT_DATE].
    DIRECTOR> VAR_27028 Use override value [] for mapping parameter:[$$LAST_EXTRACT_DATE].
    DIRECTOR> VAR_27028 Use override value [1] for mapping parameter:[$$LEDGER_ID_LIST].
    DIRECTOR> VAR_27028 Use override value ['NONE'] for mapping parameter:[$$LEDGER_TYPE_LIST].
    DIRECTOR> TM_6014 Initializing session [SDE_ORA_GL_AP_LinkageInformation_Extract_Full] at [Thu Feb 12 12:49:33 2009]
    DIRECTOR> TM_6683 Repository Name: [DEV_Oracle_BI_DW_Rep]
    DIRECTOR> TM_6684 Server Name: [DEV_Oracle_BI_DW_Rep_Integration_Service]
    DIRECTOR> TM_6686 Folder: [SDE_ORAR12_Adaptor]
    DIRECTOR> TM_6685 Workflow: [SDE_ORA_GL_AP_LinkageInformation_Extract_Full]
    DIRECTOR> TM_6101 Mapping name: SDE_ORA_GL_AP_LinkageInformation_Extract [version 1]
    DIRECTOR> TM_6827 [/u01/app/oracle/product/Informatica/PowerCenter8.1.1/server/infa_shared/Storage] will be used as storage directory for session [SDE_ORA_GL_AP_LinkageInformation_Extract_Full].
    DIRECTOR> CMN_1805 Recovery cache will be deleted when running in normal mode.
    DIRECTOR> CMN_1802 Session recovery cache initialization is complete.
    DIRECTOR> TM_6708 Using configuration property [SiebelUnicodeDB,apps@devr12 bawdev@devbi]
    DIRECTOR> TM_6703 Session [SDE_ORA_GL_AP_LinkageInformation_Extract_Full] is run by 64-bit Integration Service [node01_oratestbi], version [8.1.1 SP4], build [0817].
    MANAGER> PETL_24058 Running Partition Group [1].
    MANAGER> PETL_24000 Parallel Pipeline Engine initializing.
    MANAGER> PETL_24001 Parallel Pipeline Engine running.
    MANAGER> PETL_24003 Initializing session run.
    MAPPING> CMN_1569 Server Mode: [ASCII]
    MAPPING> CMN_1570 Server Code page: [ISO 8859-1 Western European]
    MAPPING> TM_6151 Session Sort Order: [Binary]
    MAPPING> TM_6156 Using LOW precision decimal arithmetic
    MAPPING> TM_6180 Deadlock retry logic will not be implemented.
    MAPPING> TM_6307 DTM Error Log Disabled.
    MAPPING> TE_7022 TShmWriter: Initialized
    MAPPING> TE_7004 Transformation Parse Warning; transformation continues...
    MAPPING> TE_7004 Transformation Parse Warning; transformation continues...
    MAPPING> TE_7004 Transformation Parse Warning; transformation continues...
    MAPPING> TE_7004 Transformation Parse Warning; transformation continues...
    MAPPING> TE_7004 Transformation Parse Warning; transformation continues...
    MAPPING> TE_7004 Transformation Parse Warning; transformation continues...
    MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_GL_AP_LinkageInformation_Extract_Full]
    DIRECTOR> PETL_24033 All DTM Connection Info: [<NONE>].
    MANAGER> PETL_24004 Starting pre-session tasks. : (Thu Feb 12 12:49:34 2009)
    MANAGER> PETL_24027 Pre-session task completed successfully. : (Thu Feb 12 12:49:34 2009)
    DIRECTOR> PETL_24006 Starting data movement.
    MAPPING> TM_6660 Total Buffer Pool size is 12582912 bytes and Block size is 128000 bytes.
    READER_1_1_1> DBG_21438 Reader: Source is [devr12.tessco.com], user [apps]
    READER_1_1_1> BLKR_16003 Initialization completed successfully.
    WRITER_1_*_1> WRT_8146 Writer: Target is database [DEVBI], user [bawdev], bulk mode [ON]
    WRITER_1_*_1> WRT_8106 Warning! Bulk Mode session - recovery is not guaranteed.
    WRITER_1_*_1> WRT_8124 Target Table W_GL_LINKAGE_INFORMATION_GS :SQL INSERT statement:
    INSERT INTO W_GL_LINKAGE_INFORMATION_GS(SOURCE_DISTRIBUTION_ID,JOURNAL_LINE_INTEGRATION_ID,LEDGER_ID,LEDGER_TYPE,DISTRIBUTION_SOURCE,JE_BATCH_NAME,JE_HEADER_NAME,JE_LINE_NUM,POSTED_ON_DT,SLA_TRX_INTEGRATION_ID,DATASOURCE_NUM_ID) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    WRITER_1_*_1> WRT_8270 Target connection group #1 consists of target(s) [W_GL_LINKAGE_INFORMATION_GS]
    WRITER_1_*_1> WRT_8003 Writer initialization complete.
    READER_1_1_1> BLKR_16007 Reader run started.
    WRITER_1_*_1> WRT_8005 Writer run started.
    WRITER_1_*_1> WRT_8158
    *****START LOAD SESSION*****
    Load Start Time: Thu Feb 12 12:49:34 2009
    Target tables:
    W_GL_LINKAGE_INFORMATION_GS
    READER_1_1_1> RR_4029 SQ Instance [SQ_XLA_AE_LINES] User specified SQL Query [SELECT
    DLINK.SOURCE_DISTRIBUTION_ID_NUM_1 DISTRIBUTION_ID,
    DLINK.SOURCE_DISTRIBUTION_TYPE SOURCE_TABLE,
    DLINK.ACCOUNTING_LINE_CODE LINE_CODE,
          AELINE.ACCOUNTING_CLASS_CODE,
    GLIMPREF.JE_HEADER_ID JE_HEADER_ID,
    GLIMPREF.JE_LINE_NUM JE_LINE_NUM,
    AELINE.AE_HEADER_ID AE_HEADER_ID,
    AELINE.AE_LINE_NUM AE_LINE_NUM,
    T.LEDGER_ID LEDGER_ID,
    T.LEDGER_CATEGORY_CODE LEDGER_TYPE,
        JBATCH.NAME BATCH_NAME,
       JHEADER.NAME HEADER_NAME,
          PER.END_DATE
    FROM XLA_DISTRIBUTION_LINKS DLINK
       , GL_IMPORT_REFERENCES        GLIMPREF
       , XLA_AE_LINES                              AELINE
       , GL_JE_HEADERS                         JHEADER
       , GL_JE_BATCHES                         JBATCH
       , GL_LEDGERS                                 T
       , GL_PERIODS   PER
    WHERE DLINK.SOURCE_DISTRIBUTION_TYPE IN
             (  'AP_INV_DIST', 'AP_PMT_DIST'
              , 'AP_PREPAY')
    AND DLINK.APPLICATION_ID = 200
    AND AELINE.APPLICATION_ID = 200
    AND AELINE.GL_SL_LINK_TABLE = GLIMPREF.GL_SL_LINK_TABLE
    AND AELINE.GL_SL_LINK_ID         = GLIMPREF.GL_SL_LINK_ID
    AND AELINE.AE_HEADER_ID         = DLINK.AE_HEADER_ID        
    AND AELINE.AE_LINE_NUM           = DLINK.AE_LINE_NUM
    AND GLIMPREF.JE_HEADER_ID   = JHEADER.JE_HEADER_ID
    AND JHEADER.JE_BATCH_ID       = JBATCH.JE_BATCH_ID
    AND JHEADER.LEDGER_ID                   = T.LEDGER_ID
    AND JHEADER.STATUS                         = 'P'
    AND T.PERIOD_SET_NAME = PER.PERIOD_SET_NAME
    AND JHEADER.PERIOD_NAME = PER.PERIOD_NAME
    AND JHEADER.CREATION_DATE >=
              TO_DATE('04/02/2007 00:00:00'
                    , 'MM/DD/YYYY HH24:MI:SS' )
    AND DECODE('Y', 'Y', T.LEDGER_ID, 1) IN (1)
    AND DECODE('N', 'Y', T.LEDGER_CATEGORY_CODE, 'NONE') IN ('NONE')]
    READER_1_1_1> RR_4049 SQL Query issued to database : (Thu Feb 12 12:49:34 2009)
    READER_1_1_1> CMN_1761 Timestamp Event: [Thu Feb 12 12:49:34 2009]
    READER_1_1_1> RR_4035 SQL Error [
    ORA-01114: IO error writing block to file 513 (block # 328465)
    ORA-27072: File I/O error
    Linux-x86_64 Error: 28: No space left on device
    Additional information: 4
    Additional information: 328465
    Additional information: -1
    ORA-01114: IO error writing block to file 513 (block # 328465)
    ORA-27072: File I/O error
    Linux-x86_64 Error: 28: No space left on device
    Additional information: 4
    Additional information: 328465
    Additional information: -1
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT
    DLINK.SOURCE_DISTRIBUTION_ID_NUM_1 DISTRIBUTION_ID,
    DLINK.SOURCE_DISTRIBUTION_TYPE SOURCE_TABLE,
    DLINK.ACCOUNTING_LINE_CODE LINE_CODE,
    AELINE.ACCOUNTING_CLASS_CODE,
    GLIMPREF.JE_HEADER_ID JE_HEADER_ID,
    GLIMPREF.JE_LINE_NUM JE_LINE_NUM,
    AELINE.AE_HEADER_ID AE_HEADER_ID,
    AELINE.AE_LINE_NUM AE_LINE_NUM,
    T.LEDGER_ID LEDGER_ID,
    T.LEDGER_CATEGORY_CODE LEDGER_TYPE,
    JBATCH.NAME BATCH_NAME,
    JHEADER.NAME HEADER_NAME,
    PER.END_DATE
    FROM XLA_DISTRIBUTION_LINKS DLINK
    , GL_IMPORT_REFERENCES GLIMPREF
    , XLA_AE_LINES AELINE
    , GL_JE_HEADERS JHEADER
    , GL_JE_BATCHES JBATCH
    , GL_LEDGERS T
    , GL_PERIODS PER
    WHERE DLINK.SOURCE_DISTRIBUTION_TYPE IN
    ( 'AP_INV_DIST', 'AP_PMT_DIST'
    , 'AP_PREPAY')
    AND DLINK.APPLICATION_ID = 200
    AND AELINE.APPLICATION_ID = 200
    AND AELINE.GL_SL_LINK_TABLE = GLIMPREF.GL_SL_LINK_TABLE
    AND AELINE.GL_SL_LINK_ID = GLIMPREF.GL_SL_LINK_ID
    AND AELINE.AE_HEADER_ID = DLINK.AE_HEADER_ID
    AND AELINE.AE_LINE_NUM = DLINK.AE_LINE_NUM
    AND GLIMPREF.JE_HEADER_ID = JHEADER.JE_HEADER_ID
    AND JHEADER.JE_BATCH_ID = JBATCH.JE_BATCH_ID
    AND JHEADER.LEDGER_ID = T.LEDGER_ID
    AND JHEADER.STATUS = 'P'
    AND T.PERIOD_SET_NAME = PER.PERIOD_SET_NAME
    AND JHEADER.PERIOD_NAME = PER.PERIOD_NAME
    AND JHEADER.CREATION_DATE >=
    TO_DATE('04/02/2007 00:00:00'
    , 'MM/DD/YYYY HH24:MI:SS' )
    AND DECODE('Y', 'Y', T.LEDGER_ID, 1) IN (1)
    AND DECODE('N', 'Y', T.LEDGER_CATEGORY_CODE, 'NONE') IN ('NONE')
    Oracle Fatal Error
    Database driver error...
    Function Name : Execute
    SQL Stmt : SELECT
    DLINK.SOURCE_DISTRIBUTION_ID_NUM_1 DISTRIBUTION_ID,
    DLINK.SOURCE_DISTRIBUTION_TYPE SOURCE_TABLE,
    DLINK.ACCOUNTING_LINE_CODE LINE_CODE,
    AELINE.ACCOUNTING_CLASS_CODE,
    GLIMPREF.JE_HEADER_ID JE_HEADER_ID,
    GLIMPREF.JE_LINE_NUM JE_LINE_NUM,
    AELINE.AE_HEADER_ID AE_HEADER_ID,
    AELINE.AE_LINE_NUM AE_LINE_NUM,
    T.LEDGER_ID LEDGER_ID,
    T.LEDGER_CATEGORY_CODE LEDGER_TYPE,
    JBATCH.NAME BATCH_NAME,
    JHEADER.NAME HEADER_NAME,
    PER.END_DATE
    FROM XLA_DISTRIBUTION_LINKS DLINK
    , GL_IMPORT_REFERENCES GLIMPREF
    , XLA_AE_LINES AELINE
    , GL_JE_HEADERS JHEADER
    , GL_JE_BATCHES JBATCH
    , GL_LEDGERS T
    , GL_PERIODS PER
    WHERE DLINK.SOURCE_DISTRIBUTION_TYPE IN
    ( 'AP_INV_DIST', 'AP_PMT_DIST'
    , 'AP_PREPAY')
    AND DLINK.APPLICATION_ID = 200
    AND AELINE.APPLICATION_ID = 200
    AND AELINE.GL_SL_LINK_TABLE = GLIMPREF.GL_SL_LINK_TABLE
    AND AELINE.GL_SL_LINK_ID = GLIMPREF.GL_SL_LINK_ID
    AND AELINE.AE_HEADER_ID = DLINK.AE_HEADER_ID
    AND AELINE.AE_LINE_NUM = DLINK.AE_LINE_NUM
    AND GLIMPREF.JE_HEADER_ID = JHEADER.JE_HEADER_ID
    AND JHEADER.JE_BATCH_ID = JBATCH.JE_BATCH_ID
    AND JHEADER.LEDGER_ID = T.LEDGER_ID
    AND JHEADER.STATUS = 'P'
    AND T.PERIOD_SET_NAME = PER.PERIOD_SET_NAME
    AND JHEADER.PERIOD_NAME = PER.PERIOD_NAME
    AND JHEADER.CREATION_DATE >=
    TO_DATE('04/02/2007 00:00:00'
    , 'MM/DD/YYYY HH24:MI:SS' )
    AND DECODE('Y', 'Y', T.LEDGER_ID, 1) IN (1)
    AND DECODE('N', 'Y', T.LEDGER_CATEGORY_CODE, 'NONE') IN ('NONE')
    Oracle Fatal Error].
    READER_1_1_1> CMN_1761 Timestamp Event: [Thu Feb 12 12:49:34 2009]
    READER_1_1_1> BLKR_16004 ERROR: Prepare failed.
    WRITER_1_*_1> WRT_8333 Rolling back all the targets due to fatal session error.
    WRITER_1_*_1> WRT_8325 Final rollback executed for the target [W_GL_LINKAGE_INFORMATION_GS] at end of load
    WRITER_1_*_1> WRT_8035 Load complete time: Thu Feb 12 12:49:34 2009
    LOAD SUMMARY
    ============
    WRT_8036 Target: W_GL_LINKAGE_INFORMATION_GS (Instance Name: [W_GL_LINKAGE_INFORMATION_GS])
    WRT_8044 No data loaded for this target
    WRITER_1_*_1> WRT_8043 *****END LOAD SESSION*****
    MANAGER> PETL_24031
    ***** RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
    Thread [READER_1_1_1] created for [the read stage] of partition point [SQ_XLA_AE_LINES] has completed: Total Run Time = [0.673295] secs, Total Idle Time = [0.000000] secs, Busy Percentage = [100.000000].
    Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [SQ_XLA_AE_LINES] has completed. The total run time was insufficient for any meaningful statistics.
    Thread [WRITER_1_*_1] created for [the write stage] of partition point [W_GL_LINKAGE_INFORMATION_GS] has completed. The total run time was insufficient for any meaningful statistics.
    MANAGER> PETL_24005 Starting post-session tasks. : (Thu Feb 12 12:49:35 2009)
    MANAGER> PETL_24029 Post-session task completed successfully. : (Thu Feb 12 12:49:35 2009)
    MAPPING> TM_6018 Session [SDE_ORA_GL_AP_LinkageInformation_Extract_Full] run completed with [0] row transformation errors.
    MANAGER> PETL_24002 Parallel Pipeline Engine finished.
    DIRECTOR> PETL_24013 Session run completed with failure.
    DIRECTOR> TM_6022
    SESSION LOAD SUMMARY
    ================================================
    DIRECTOR> TM_6252 Source Load Summary.
    DIRECTOR> CMN_1740 Table: [SQ_XLA_AE_LINES] (Instance Name: [SQ_XLA_AE_LINES])
         Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
    DIRECTOR> TM_6253 Target Load Summary.
    DIRECTOR> CMN_1740 Table: [W_GL_LINKAGE_INFORMATION_GS] (Instance Name: [W_GL_LINKAGE_INFORMATION_GS])
         Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
    DIRECTOR> TM_6023
    ===================================================
    DIRECTOR> TM_6020 Session [SDE_ORA_GL_AP_LinkageInformation_Extract_Full] completed at [Thu Feb 12 12:49:36 2009]
    Thanks in Advance,
    Prashanth

    Need to increase temp tablespace.
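    That diagnosis fits the log: a file number as high as 513 typically indicates a tempfile rather than a datafile, and the ORA-27072 "No space left on device" fired while the reader's large seven-table join was sorting. A hedged sketch of checking and extending temp space (the views are standard Oracle dictionary views; the tempfile path and sizes are hypothetical):

    -- How full is TEMP right now?
    SELECT tablespace_name, tablespace_size, allocated_space, free_space
    FROM   dba_temp_free_space;

    -- Give the sort more room.
    ALTER TABLESPACE temp
      ADD TEMPFILE '/u01/oradata/devbi/temp02.dbf'
      SIZE 4G AUTOEXTEND ON NEXT 512M MAXSIZE 16G;

    Note that if the underlying filesystem itself is full, the new tempfile has to go on a volume that actually has free space.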

  • ETL Load error

    Hi
    When I ran the ETL load for Project Analytics, it errored out.
    In the DAC, these 3 items errored:
    1. SIL_GlobalCurrencyGeneral_Update
    2. SDE_ORA_UserDimension
    3. SDE_ORA_EmployeeDimension
    Below is the error from the log file. Any help will be appreciated.
    9 SEVERE Thu Sep 24 17:54:48 PDT 2009
    START OF ETL
    10 SEVERE Thu Sep 24 17:57:14 PDT 2009 Unable to evaluate method getNamedSourceIdentifier for class com.siebel.analytics.etl.etltask.PauseTask
    11 SEVERE Thu Sep 24 17:57:15 PDT 2009 Unable to evaluate method getNamedSource for class com.siebel.analytics.etl.etltask.PauseTask
    12 SEVERE Thu Sep 24 17:57:20 PDT 2009 Unable to evaluate method getNamedSourceIdentifier for class com.siebel.analytics.etl.etltask.InformaticaTask
    13 SEVERE Thu Sep 24 17:57:20 PDT 2009 Unable to evaluate method getNamedSource for class com.siebel.analytics.etl.etltask.InformaticaTask
    14 SEVERE Thu Sep 24 17:58:27 PDT 2009 Unable to evaluate method getNamedSourceIdentifier for class com.siebel.analytics.etl.etltask.TaskPrecedingActionScriptTask
    15 SEVERE Thu Sep 24 17:58:27 PDT 2009 Unable to evaluate method getNamedSource for class com.siebel.analytics.etl.etltask.TaskPrecedingActionScriptTask
    16 SEVERE Thu Sep 24 17:58:39 PDT 2009 Starting ETL Process.
    17 SEVERE Thu Sep 24 17:59:14 PDT 2009 Informatica Status Poll Interval new value : 20000(milli-seconds)
    19 SEVERE Thu Sep 24 18:06:59 PDT 2009 /oracle/dac/OracleBI/bifoundation/dac/Informatica/parameters/input/ORACLE specified is not a currently existing directory
    20 SEVERE Thu Sep 24 18:06:59 PDT 2009 /oracle/dac/OracleBI/bifoundation/dac/Informatica/parameters/input/Oracle specified is not a currently existing directory
    21 SEVERE Thu Sep 24 18:06:59 PDT 2009 /oracle/dac/OracleBI/bifoundation/dac/Informatica/parameters/input/oracle specified is not a currently existing directory
    22 SEVERE Thu Sep 24 18:06:59 PDT 2009 /oracle/dac/OracleBI/bifoundation/dac/Informatica/parameters/input/ORACLE (THIN) specified is not a currently existing directory
    24 SEVERE Thu Sep 24 18:07:10 PDT 2009 /oracle/dac/OracleBI/bifoundation/dac/Informatica/parameters/input/FLAT FILE specified is not a currently existing directory
    25 SEVERE Thu Sep 24 18:08:23 PDT 2009 Request to start workflow : 'SILOS:SIL_CurrencyTypes' has completed with error code 0
    26 SEVERE Thu Sep 24 18:08:26 PDT 2009 Request to start workflow : 'SDE_ORAR12_Adaptor:SDE_ORA_ProductMultipleCategories_Full' has completed with error code 0
    27 SEVERE Thu Sep 24 18:08:26 PDT 2009 Request to start workflow : 'SDE_ORAR12_Adaptor:SDE_ORA_ExchangeRateGeneral_Full' has completed with error code 0
    28 SEVERE Thu Sep 24 18:08:26 PDT 2009 Request to start workflow : 'SDE_ORAR12_Adaptor:SDE_ORA_Product_Categories_Derive' has completed with error code 0
    29 SEVERE Thu Sep 24 18:08:26 PDT 2009 Request to start workflow : 'SILOS:SIL_Parameters_Update' has completed with error code 0
    30 SEVERE Thu Sep 24 18:08:26 PDT 2009 Request to start workflow : 'SDE_ORAR12_Adaptor:SDE_ORA_Stage_GLAccountDimension_FinSubCodes' has completed with error code 0
    31 SEVERE Thu Sep 24 18:08:27 PDT 2009 Request to start workflow : 'SDE_ORAR12_Adaptor:SDE_ORA_EmployeeDimension_Addresses_Full' has completed with error code 0
    32 SEVERE Thu Sep 24 18:08:27 PDT 2009 Request to start workflow : 'SDE_ORAR12_Adaptor:SDE_ORA_UserDimension_Full' has completed with error code 0
    33 SEVERE Thu Sep 24 18:08:27 PDT 2009 Request to start workflow : 'SDE_ORAR12_Adaptor:SDE_ORA_GeoCountryDimension' has completed with error code 0
    34 SEVERE Thu Sep 24 18:08:27 PDT 2009 Request to start workflow : 'SILOS:SIL_GlobalCurrencyGeneral_Update' has completed with error code 0
    35 SEVERE Thu Sep 24 18:19:53 PDT 2009 Error while contacting Informatica server for getting workflow status for SDE_ORA_UserDimension_Full
    Error Code = 36331:Unknown reason for error code 36331
    Pmcmd output :
    =====================================
    STD OUTPUT
    =====================================
    Informatica(r) PMCMD, version [8.6.0 HotFix4], build [272.1017], LINUX 32-bit
    Copyright (c) Informatica Corporation 1994 - 2008
    All Rights Reserved.
    Invoked at Thu Sep 24 18:19:40 2009
    Connected to Integration Service: [Integ_r1211].
    Integration Service status: [Running]
    Integration Service startup time: [Thu Sep 24 11:50:39 2009]
    Integration Service current time: [Thu Sep 24 18:19:43 2009]
    Folder: [SDE_ORAR12_Adaptor]
    Workflow: [SDE_ORA_UserDimension_Full] version [1].
    Workflow run status: [Failed]
    Workflow run error code: [36331]
    Workflow run error message: [WARNING: Session task instance [SDE_ORA_UserDimension_Full] failed and its "fail parent if this task fails" setting is turned on. So, Workflow [SDE_ORA_UserDimension_Full] will be failed.]
    Workflow run id [1611].
    Start time: [Thu Sep 24 18:08:26 2009]
    End time: [Thu Sep 24 18:15:45 2009]
    Workflow log file: [oracle/Informatica/PowerCenter8.6.0/server/infa_shared/WorkflowLogs/SDE_ORA_UserDimension_Full.log]
    Workflow run type: [User request]
    Run workflow as user: [Administrator]
    Run workflow with Impersonated OSProfile in domain: []
    Integration Service: [Integ_r1211]
    Disconnecting from Integration Service
    Completed at Thu Sep 24 18:19:43 2009
    =====================================
    ERROR OUTPUT
    =====================================
    37 SEVERE Thu Sep 24 18:19:53 PDT 2009 pmcmd startworkflow -sv Integ_r1211 -d Domain_r1211 -u Administrator -p **** -f SDE_ORAR12_Adaptor -lpf /oracle/Informatica/PowerCenter8.6.0/server/infa_shared/SrcFiles/ORA_R12_Flatfile.DataWarehouse.SDE_ORAR12_Adaptor.SDE_ORA_Stage_GLAccountDimension_FinSubCodes.txt SDE_ORA_Stage_GLAccountDimension_FinSubCodes
    Status Desc : Succeeded
    WorkFlowMessage : Workflow executed successfully.
    Error Message : Successfully completed.
    ErrorCode : 0
    36 SEVERE Thu Sep 24 18:19:53 PDT 2009 Error while contacting Informatica server for getting workflow status for SIL_GlobalCurrencyGeneral_Update
    Error Code = 36331:Unknown reason for error code 36331
    Pmcmd output :
    =====================================
    STD OUTPUT
    =====================================
    Informatica(r) PMCMD, version [8.6.0 HotFix4], build [272.1017], LINUX 32-bit
    Copyright (c) Informatica Corporation 1994 - 2008
    All Rights Reserved.
    Invoked at Thu Sep 24 18:19:12 2009
    Connected to Integration Service: [Integ_r1211].
    Integration Service status: [Running]
    Integration Service startup time: [Thu Sep 24 11:50:39 2009]
    Integration Service current time: [Thu Sep 24 18:19:21 2009]
    Folder: [SILOS]
    Workflow: [SIL_GlobalCurrencyGeneral_Update] version [1].
    Workflow run status: [Failed]
    Workflow run error code: [36331]
    Workflow run error message: [WARNING: Session task instance [SIL_GlobalCurrencyGeneral_Update] failed and its "fail parent if this task fails" setting is turned on. So, Workflow [SIL_GlobalCurrencyGeneral_Update] will be failed.]
    Workflow run id [1610].
    Start time: [Thu Sep 24 18:08:26 2009]
    End time: [Thu Sep 24 18:15:47 2009]
    Workflow log file: [oracle/Informatica/PowerCenter8.6.0/server/infa_shared/WorkflowLogs/SIL_GlobalCurrencyGeneral_Update.log]
    Workflow run type: [User request]
    Run workflow as user: [Administrator]
    Run workflow with Impersonated OSProfile in domain: []
    Integration Service: [Integ_r1211]
    Disconnecting from Integration Service
    Completed at Thu Sep 24 18:19:21 2009
    =====================================
    ERROR OUTPUT
    =====================================
    38 SEVERE Thu Sep 24 18:19:53 PDT 2009 Could not attach to workflow because of errorCode 36331 For workflow SDE_ORA_UserDimension_Full
    39 SEVERE Thu Sep 24 18:19:53 PDT 2009 Could not attach to workflow because of errorCode 36331 For workflow SIL_GlobalCurrencyGeneral_Update
    40 SEVERE Thu Sep 24 18:19:53 PDT 2009
    ANOMALY INFO::: Error while executing : INFORMATICA TASK:SDE_ORAR12_Adaptor:SDE_ORA_UserDimension_Full:(Source : FULL Target : FULL)
    MESSAGE:::
    Irrecoverable Error
    Error while contacting Informatica server for getting workflow status for SDE_ORA_UserDimension_Full
    Error Code = 36331:Unknown reason for error code 36331
    Pmcmd output :
    =====================================
    STD OUTPUT
    =====================================
    Informatica(r) PMCMD, version [8.6.0 HotFix4], build [272.1017], LINUX 32-bit
    Copyright (c) Informatica Corporation 1994 - 2008
    All Rights Reserved.
    Invoked at Thu Sep 24 18:19:40 2009
    Connected to Integration Service: [Integ_r1211].
    Integration Service status: [Running]
    Integration Service startup time: [Thu Sep 24 11:50:39 2009]
    Integration Service current time: [Thu Sep 24 18:19:43 2009]
    Folder: [SDE_ORAR12_Adaptor]
    Workflow: [SDE_ORA_UserDimension_Full] version [1].
    Workflow run status: [Failed]
    Workflow run error code: [36331]
    Workflow run error message: [WARNING: Session task instance [SDE_ORA_UserDimension_Full] failed and its "fail parent if this task fails" setting is turned on. So, Workflow [SDE_ORA_UserDimension_Full] will be failed.]
    Workflow run id [1611].
    Start time: [Thu Sep 24 18:08:26 2009]
    End time: [Thu Sep 24 18:15:45 2009]
    Workflow log file: [oracle/Informatica/PowerCenter8.6.0/server/infa_shared/WorkflowLogs/SDE_ORA_UserDimension_Full.log]
    Workflow run type: [User request]
    Run workflow as user: [Administrator]
    Run workflow with Impersonated OSProfile in domain: []
    Integration Service: [Integ_r1211]
    Disconnecting from Integration Service
    Completed at Thu Sep 24 18:19:43 2009
    =====================================
    ERROR OUTPUT
    =====================================
    Re-Queue to attempt to run again or attach to running workflow
    if Execution Plan is still running or re-submit Execution Plan to execute the workflow.
    EXCEPTION CLASS::: com.siebel.analytics.etl.etltask.IrrecoverableException
    com.siebel.analytics.etl.etltask.InformaticaTask.doExecute(InformaticaTask.java:179)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.doExecuteWithRetries(GenericTaskImpl.java:410)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:306)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.execute(GenericTaskImpl.java:213)
    com.siebel.analytics.etl.etltask.GenericTaskImpl.run(GenericTaskImpl.java:585)
    com.siebel.analytics.etl.taskmanager.XCallable.call(XCallable.java:63)
    java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    java.util.concurrent.FutureTask.run(FutureTask.java:138)
    java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    java.util.concurrent.FutureTask.run(FutureTask.java:138)
    java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    java.lang.Thread.run(Thread.java:619)
    41 SEVERE Thu Sep 24 18:20:01 PDT 2009
    ANOMALY INFO::: Error while executing : INFORMATICA TASK:SILOS:SIL_GlobalCurrencyGeneral_Update:(Source : FULL Target : FULL)
    MESSAGE:::
    Irrecoverable Error
    Error while contacting Informatica server for getting workflow status for SIL_GlobalCurrencyGeneral_Update
    Error Code = 36331:Unknown reason for error code 36331
    Pmcmd output :

    Shyam
    I have attached the info you asked for.
    When I reran, this time only SDE_ORA_ExchangeRateGeneral failed. Below is the output of the 2 files:
    1. SDE_ORA_ExchangeRateGeneral_Full
    =====================================
    STD OUTPUT
    =====================================
    Informatica(r) PMCMD, version [8.6.0 HotFix4], build [272.1017], LINUX 32-bit
    Copyright (c) Informatica Corporation 1994 - 2008
    All Rights Reserved.
    Invoked at Fri Sep 25 10:19:38 2009
    Connected to Integration Service: [Integ_r1211].
    Integration Service status: [Running]
    Integration Service startup time: [Thu Sep 24 11:50:39 2009]
    Integration Service current time: [Fri Sep 25 10:19:43 2009]
    Folder: [SDE_ORAR12_Adaptor]
    Workflow: [SDE_ORA_ExchangeRateGeneral_Full] version [1].
    Workflow run status: [Failed]
    Workflow run error code: [36331]
    Workflow run error message: [WARNING: Session task instance [SDE_ORA_ExchangeRateGeneral_Compress_Full] failed and its "fail parent if this task fails" setting is turned on. So, Workflow [SDE_ORA_ExchangeRateGeneral_Full] will be failed.]
    Workflow run id [1806].
    Start time: [Fri Sep 25 10:18:00 2009]
    End time: [Fri Sep 25 10:19:18 2009]
    Workflow log file: [oracle/Informatica/PowerCenter8.6.0/server/infa_shared/WorkflowLogs/SDE_ORA_ExchangeRateGeneral_Full.log]
    Workflow run type: [User request]
    Run workflow as user: [Administrator]
    Run workflow with Impersonated OSProfile in domain: []
    Integration Service: [Integ_r1211]
    Disconnecting from Integration Service
    Completed at Fri Sep 25 10:19:43 2009
    =====================================
    ERROR OUTPUT
    =====================================
    2. SDE_ORA_ExchangeRateGeneral_Full_SESSIONS
    =====================================
    STD OUTPUT
    =====================================
    Informatica(r) PMREP, version [8.6.0 HotFix4], build [272.1017], LINUX 32-bit
    Copyright (c) Informatica Corporation 1994 - 2008
    All Rights Reserved.
    This Software may be protected by U.S. Patent Numbers 6,208,990; 6,044,374; 6,014,670; 6,032,158; 5,794,246; 6,339,775; 6,850,947; 6,895,471; 7,254,590 and other U.S. Patents Pending.
    Invoked at Fri Sep 25 10:08:55 2009
    [[REP_57066] Request timed out.]
    [09/25/2009 10:12:05-[REP_55112] Unable to connect to the Repository Service [Rep_r1211] since the resilience time is up.]
    [Failed to connect to repository service [Rep_r1211].]
    An error occurred while accessing the repository[Failed to connect to repository service [Rep_r1211].]
    [09/25/2009 10:12:05-[REP_55102] Failed to connect to repository service [Rep_r1211].]
    Repository connection failed.
    Failed to execute listobjectdependencies.
    Completed at Fri Sep 25 10:12:05 2009
    =====================================
    ERROR OUTPUT
    =====================================

  • Unable to see Data on Dashboard after running ETL Load in DAC (BI Analytics)

    Hi All,
    I have installed and configured BI Analytics, and my ETL load runs successfully. But when I open the Dashboard,
    I am unable to see the data. We are configuring out-of-the-box. Can anyone please help me?
    Thanks in Advance,
    Prashanth

    Vinay,
    I did not change anything. I just ran the ETL and can see the Payables dashboard, and I am unable to understand why I cannot see the General Ledger and Profitability dashboards.
    Regards,
    Prashanth

  • Performance problem in loading the Master data attributes 0Equipment_attr

    Hi Experts,
    We have a performance problem loading the master data attributes 0Equipment_attr. It runs as a pseudo-delta (full update), and the same InfoPackage runs with different selections. The problem we are facing is that the load runs 2 to 4 hours in the US morning, but at US night time it runs for 12-22 hours before finishing successfully, even though it pulls fewer records (which are OK).
    When I checked the R/3-side job log (SM37), the job is running late there too. It shows the first and second IDocs arriving in little time, but the third and fourth IDocs arrive in BW after a 5-7 hour gap, are saved to the PSA, and then go to the InfoObject.
    We have user exits for the DataSource and ABAP routines, but they run fine in little time and the code is not very complex either.
    Can you please explain and suggest steps on the R/3 side and the BW side? How can I fix this performance issue?
    Thanks,
    dp

    Hi,
    check this link for data load performance. Under "Extraction Performance" you will find many useful hints.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    Regards
    Andreas

  • Fetch records from ETL Load control tables in BODS

    Hi,
    Can anyone please tell me how to fetch the records from the ETL load control tables in BODS?
    (E.g.) ETL_BATCH, ETL_JOB, ETL_DATAFLOW, ETL_RECON, ETL_ERROR.
    These are some of the ETL load control tables.
    Thanks,
    Regards,
    Ranjith.

    Hi Ranjith,
    You can ask your administrator for BODS repository login details.
    Once you get the login details, you will have access to all the tables.
    Please check following links :
    Data Services Metadata Query Part 1
    http://www.dwbiconcepts.com/etl/23-etl-bods/171-data-services-metadata-query-part-2.html
    http://www.dwbiconcepts.com/etl/23-etl-bods/171-data-services-metadata-query-part-3.html
    I hope this will help.
    If you want more info then please let me know.
    Thanks,
    Swapnil
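    Once you can log in, the control tables listed above are ordinary database tables you can query with SQL. A hedged sketch (the column names here are assumptions for illustration; check your actual table definitions):

    SELECT job_name, batch_id, start_time, end_time, status
    FROM   etl_job
    WHERE  status <> 'SUCCESS'
    ORDER  BY start_time DESC;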

  • DAC ETL Load Failed

    Hi,
    I have followed the:
    Oracle® Business Intelligence Applications
    Installation Guide for Informatica PowerCenter Users
    Release 7.9.6.3
    E19038-01
    Documentation to install OBIA.
    I ran the sample DAC ETL load for EBS Financials - Receivables, and after completing 225 of the 296 tasks it started to fail. I have 70 failed tasks and am trying to find out how to debug this. Do you know where the log files are stored?
    The Status Description says "Failure detected. Please check log file."
    What is the best way to approach this for debugging?
    Thanks.

    Get the workflow name from the DAC task and look for the log file under
    <Install Dir>\Informatica\<version>\server\infa_shared\SessLogs
    If it is helpful, please mark as correct or helpful. Thanks

  • DAC ETL load failure

    Hi All.
    I am getting the error below while doing a unit test on the task "SDE_ORA_Stage_ValueSetHier_Extract" ETL load through DAC.
    50 SEVERE Wed Nov 23 12:09:25 IST 2011
    ANOMALY INFO::: DataWarehouse:TRUNCATE TABLE W_LOC_CURR_CODE_TMP
    MESSAGE:::ORA-00942: table or view does not exist
    I have manually checked the data warehouse schema and couldn't find the table there. How do I create it, and when does it get created?
    Many Thanks,

    I have this in the session log:
    Severity     Timestamp     Node     Thread     Message Code     Message
    ERROR     11/23/2011 3:40:04 PM     node01_stl-BItest.stl.com     34     VAR_27086     Cannot find specified parameter file [BI/Informatica_home/server/infa_shared/SrcFiles/SDE_ORAR12_Adaptor.SDE_ORA_ExchangeRateGeneral.txt] for [session [SDE_ORA_ExchangeRateGeneral.SDE_ORA_ExchangeRateGeneral_CleanUp]].
    ERROR     11/23/2011 3:40:04 PM     node01_stl-BItest.stl.com     34     LM_36488     Session task instance [SDE_ORA_ExchangeRateGeneral_CleanUp] : [PETL_24049 Failed to get the initialization properties from the master service process for the prepare phase [Session task instance [SDE_ORA_ExchangeRateGeneral_CleanUp]: Unable to read variable definition from parameter file [BI/Informatica_home/server/infa_shared/SrcFiles/SDE_ORAR12_Adaptor.SDE_ORA_ExchangeRateGeneral.txt].] with error code [1049600].]
    Any thoughts?
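    Regarding the ORA-00942 on W_LOC_CURR_CODE_TMP: before recreating anything, it is worth confirming which schema the DataWarehouse connection actually points at, since the table may exist under a different owner. A quick check, run as a user who can see other schemas:

    SELECT owner, table_name
    FROM   all_tables
    WHERE  table_name = 'W_LOC_CURR_CODE_TMP';

    If it is genuinely absent, warehouse tables like this are normally created when the schema is generated from DAC rather than by the task at run time, so recreating it along with the other warehouse tables from DAC is the usual route.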

  • ETL load for Project Analytics has many Errors

    Shyam
    I created a new thread for the ETL load error we are having. Our ETL finished with the following status:
    Total Number of Tasks 302
    Number of Succesful Tasks 82
    Number of Failed Tasks 214
    I went to the failed tasks at the bottom and looked at the log files and the target tables. For many of them there doesn't seem to be any problem.
    For example, the task SDE_ORA_Project had the status Failed. When I clicked Details for the line, it said:
    Truncate table W_PROJECT_DS - Completed
    SDE_ORA_Project - Completed
    Analyze W_PROJECT_DS - Failed
    And I opened the log file ORA_R12.DATAWAREHOUSE.SDE_ORAR12_Adaptor.SDE_ORA_Project.log under the SessLogs folder. I took just a few lines to show you the status:
    Read [178] rows, read [0] error rows for source table [mplt_SA_ORA_Project.LKP_PROJ_CUST{{DSQ}}]
    Read [326] rows, read [0] error rows for source table [HR_ALL_ORGANIZATION_UNITS]
    Read [3] rows, read [0] error rows for source table [mplt_SA_ORA_Project.LKP_PROJECT_TYPE_CLASS_CODE{{DSQ}}]
    Read [2] rows, read [0] error rows for source table [mplt_SA_ORA_Project.LKP_PROJECT_FUNDING_LEVEL_CODE{{DSQ}}]
    Read [3] rows, read [0] error rows for source table [mplt_SA_ORA_Project.LKP_PROJECT_SECURITY_LEVEL{{DSQ}}]
    The session completed with [0] row transformation errors.
    Then I went to the target table W_PROJECT_DS and it had 326 records. This is the same for many of the failed ones. Remember, our first load failed due to an accidental shutdown of the server. This is the second run, after I did "Reset Data Sources".
    The other problem is that our dashboard still shows sample data (1. Overview, 2. Rankers & Toppers, 3. History & Benching, 4. Tiering & Distribution). Some of the Project Analytics data loaded, so I don't know why it did not change. We are using the Administrator user. Should we use a different user?
    Thanks

    Some details I found in the log files. Many of the log files had warning messages:
    1. ORA_R12.DATAWAREHOUSE.SILOS.SIL_GlobalCurrencyGeneral_Update
    TRANSF_1_2_1> CMN_1079 WARNING: Lookup table contains no data.
    TRANSF_1_2_1> DBG_21524 Transform : Lkp_W_GLOBAL_CURR_G
    TRANSF_1_2_1> DBG_21313 Lookup table : W_GLOBAL_CURR_G
    TRANSF_1_2_1> DBG_21562 WARNING : Output rows from Lkp_W_GLOBAL_CURR_G will be the default port value
    2. ORA_R12.DATAWAREHOUSE.SDE_ORAR12_Adaptor.SDE_ORA_CodeDimension_Gl_Account
    TRANSF_1_1_1> CMN_1079 WARNING: Lookup table contains no data.
    TRANSF_1_1_1> DBG_21524 Transform : LKP_W_MASTER_CODE_D_MASTER_CODE
    TRANSF_1_1_1> DBG_21313 Lookup table : W_MASTER_CODE_D
    TRANSF_1_1_1> DBG_21562 WARNING : Output rows from LKP_W_MASTER_CODE_D_MASTER_CODE will be the default port value
    3.ORA_R12.DATAWAREHOUSE.SDE_ORAR12_Adaptor.SDE_ORA_ExchangeRateGeneral_Compress_Full
    MAPPING> TE_7004 Transformation Parse Warning [FROM_CURRENCY||'~'||TO_CURRENCY||'~'||CONVERSION_TYPE||'~'||EFF_FROM_DATE]; transformation continues...
    MAPPING> CMN_1761 Timestamp Event: [Wed Aug 19 17:11:59 2009]
    MAPPING> TE_7004 Transformation Parse Warning [<<PM Parse Warning>> [EFF_FROM_DATE]: operand converted to a string
    ... FROM_CURRENCY||'~'||TO_CURRENCY||'~'||CONVERSION_TYPE||'~'||>>>>EFF_FROM_DATE<<<<]; transformation continues...
    MAPPING> TM_6007 DTM initialized successfully for session [SDE_ORA_ExchangeRateGeneral_Compress_Full]
    Does this point to the issue?
    Thanks

  • Router can perform static route load balance

    Dear All
    I am not sure about a question and need your ideas and help. The question is whether a router can perform static route load balancing. I tested it, and the result showed no. If you have any experience with this, could you share it with me? I have also posted my results here. Thank you.

    Normally they can, but you generally need different next hops. How did you "test"?

  • Methods of Informatica ETL load to SAP R/3

    If anybody has information and detailed documentation on the Informatica PowerConnect ETL load tool for SAP R/3, please send it across to me.

    I apologise for the late answer, I only joined a few days ago.
    The documentation of PowerCenter is pretty clear about installation procedures, so I would like to know what exactly you need to know. How to install and set up the PowerConnect for SAP R/3? Or what else?
    Or have you downloaded the software but didn't download the documentation?
    Regards,
    Nico

  • Poor Performance in ETL SCD Load

    Hi gurus,
    We are facing some serious performance problems during an UPDATE step, which is part of a SCD type 2 process for Assets (SIL_Vert/SIL_AssetDimension_SCDUpdate). The source system is Siebel CRM. The tools for ETL processing are listed below:
    Informatica PowerCenter 9.1.0 HotFix2 0902 357 (R181 D90)
    Oracle BI Data Warehouse Administration Console (Dac Build AN 10.1.3.4.1.patch.20120711.0516)
    The OOTB mapping for this step is a simple SELECT command - which retrieves historical records from the dimension to be updated - plus the target table (W_ASSET_D), with no Update Strategy. The session is configured to always perform UPDATEs. We have also set $$UDATE_ALL_HISTORY to "N" in DAC: this way we only select the most recent records from the dimension history, and the only columns that are effectively updated are the SCD system columns (EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, CURRENT_FLG, ...).
    The problem is that the UPDATE command is executed individually by Informatica PowerCenter for each record in W_ASSET_D. For 2,486,000 UPDATEs we had ~2h of processing - very poor performance for a single ETL step. Our W_ASSET_D has ~150M records today.
    Some questions for the above:
    - Is this an expected average execution duration for this number of records?
    - Record-by-record updates are not optimal; this could easily be overcome by a BULK COLLECT/FORALL method. Is there a way to optimize the method used by Informatica, or do we need to write our own PL/SQL script and run it in DAC?
    Thanks in advance,
    Guilherme
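    On the BULK COLLECT/FORALL point: if the custom PL/SQL route is taken, a minimal sketch of closing out current dimension rows in bulk might look like this (assuming a simplified W_ASSET_D and a hypothetical staging table W_ASSET_DS keyed on INTEGRATION_ID; column names beyond the SCD system columns mentioned in the post are assumptions):

    DECLARE
      TYPE t_rid IS TABLE OF ROWID;
      l_rids t_rid;
    BEGIN
      -- Collect the current rows that have a new version staged.
      SELECT d.ROWID
        BULK COLLECT INTO l_rids
        FROM w_asset_d d
       WHERE d.current_flg = 'Y'
         AND EXISTS (SELECT 1
                       FROM w_asset_ds s
                      WHERE s.integration_id = d.integration_id);
      -- Close them out in one array UPDATE instead of ~2.5M single ones.
      -- (For very large batches, fetch with a cursor and LIMIT instead.)
      FORALL i IN 1 .. l_rids.COUNT
        UPDATE w_asset_d
           SET effective_to_dt = SYSDATE,
               current_flg     = 'N'
         WHERE ROWID = l_rids(i);
      COMMIT;
    END;
    /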

    Hi,
    Thank you for posting in Windows Server Forum.
    Initially please check the configuration & requirement part for RemoteFX. You can follow below article for further research.
    RemoteFX vGPU Setup and Configuration Guide for Windows Server 2012
    http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    TechNet Community Support

  • Full ETL Load into PRD working around system downtime

    The problem we have is that some of the Informatica ETL tasks run 30+ hours in Prod. But we have overnight backups and batches, which means we have at most 18 hours per day, and we have to stop the ETL at that point. When we stop the ETL, it sends a truncate signal and wipes all the data out.
    Is there a way to make a task not send the truncate signal, and instead commit at that execution point and resume from there when the ETL is stopped and restarted around the overnight activities on the production system? Or is there a way to say that a task will run at a particular time, so that we can cancel the overnight backups?
    Thanks in advance for any help you can offer

    Hi,
    The tables which get truncated or rolled back are in both SDEs and SILs. Only particular tables get rolled back, not the entire warehouse. This happens in particular SDEs and SILs, as some of the tasks take quite a while to complete. Yes, we have gone through the Oracle performance recommendations and applied most of what could be done within the timeframe we have.
    For example, say Payroll data is being loaded, which might take 25+ hours, and we have a daily window of 17 hours. At that point, when I stop the ETL, it marks the task as failed, whether SDE or SIL. What we want at that point is for the staging/target table to commit itself instead of rolling back. This would let the task pick up from that point rather than running all over again.
    I tried changing the commit settings to 5,000 rows instead of the default 10,000 rows in Workflow Manager. I also enabled persistent cache for these mappings and changed the recovery strategy to 'restart from previous commit point'; both seemed to make no difference.
    Thanks.

  • Query performance and data loading performance issues

    What query performance issues do we need to take care of? Please explain and let me know the T-codes.
    What data loading performance issues do we need to take care of? Please explain and let me know the T-codes.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary to improve selection performance (a hedged sketch follows this list).
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
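    To make tips 6 and 9 concrete, a hedged sketch of a secondary index on an extractor's selection fields (the table and field names are hypothetical):

    CREATE INDEX zdatasource_tab_sel
      ON zdatasource_tab (fiscal_year, fiscal_period);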
    Hope it Helps
    Chetan
