What is staging table in BW?

Hi Experts,
Can you tell me what a staging table in BW is? What does "staging" mean? What is the definition? Where can I get more information about staging tables?
Thanks very much for your input.

Hi,
The staging table is nothing but the PSA table. Data is stored here in the same format as in the source system (raw data). PSA tables are transparent, two-dimensional tables. "Staging" means that before processing the data, we bring it in and store it in the same format as the source system. This way we avoid disturbing the OLTP system, because daily transactions are taking place there. In BI 7.0 the PSA is mandatory; only if the data volume is very small can it be bypassed.
Tarak
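
To illustrate the idea outside BW: the following is only a generic SQL sketch of the staging pattern with made-up names, not how BW physically creates PSA tables.

-- Landing table mirrors the source extract 1:1 - no transformations yet.
CREATE TABLE stg_sales_raw (
  request_id  VARCHAR2(30),  -- identifies the load request
  record_no   NUMBER,        -- position of the record within the request
  customer_id VARCHAR2(10),  -- columns exactly as delivered by the source
  amount_raw  VARCHAR2(20)   -- amounts kept in raw source format
);

CREATE TABLE dwh_sales (
  customer_id VARCHAR2(10),
  amount      NUMBER
);

-- The OLTP system is read once per load; every later cleansing or
-- transformation step works on the staged copy, not on the live system.
INSERT INTO dwh_sales (customer_id, amount)
SELECT customer_id, TO_NUMBER(amount_raw)
FROM   stg_sales_raw
WHERE  request_id = 'REQ_20131105';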

Similar Messages

  • Updating 3 value sets daily from a staging table - which API to use and how to map it?

    My requirement is to update 3 value sets daily based on data coming into my staging table. What API is used for this, and how do I map an API to our staging table? I am totally new to Oracle and Apps. Please help. Thanks!

    Hi,
    You could use FND_FLEX_LOADER_APIS.UP_VALUE_SET_VALUE to upload them from the staging table (I suppose you mean value set values...).
    You can find sample scripts if you google around.
    What do you mean by "how to map any API to our staging table"?
    You should do at least the following mapping (i.e., which column(s) in the staging table will provide this information):
    - the names of the 3 value sets you're going to update/upload (I suppose these are existing value sets, or ones which have already been created)
    - the value set values and descriptions
    Try to start with something, and if there are any issues the community can then help... but for the time being, with the description of the problem you have provided, that's the best I can do...
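
    To make the mapping concrete, here is a minimal sketch of a staging table covering the columns the reply above asks for. The table and column names are hypothetical, and the full parameter list of FND_FLEX_LOADER_APIS.UP_VALUE_SET_VALUE should be checked against the package specification in your EBS release before writing the loader loop:

    -- Hypothetical staging table: one row per value to upload.
    CREATE TABLE xx_value_set_stg (
      value_set_name VARCHAR2(60)  NOT NULL,  -- which of the 3 value sets this row targets
      flex_value     VARCHAR2(150) NOT NULL,  -- the value itself
      description    VARCHAR2(240),           -- its description
      process_flag   VARCHAR2(1) DEFAULT 'N', -- set to 'Y' once uploaded
      error_message  VARCHAR2(4000)           -- populated when the API call fails
    );

    A PL/SQL block would then loop over the unprocessed rows and call FND_FLEX_LOADER_APIS.UP_VALUE_SET_VALUE once per row, updating process_flag afterwards.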

  • Unable to load data from FDMEE staging table to HFM tables

    Hi,
    We have installed EPM 11.1.2.3 with all the latest related products (ODI/FDMEE) in our development environment.
    We are in the process of loading data from EBS R12 to HFM using ERPI (Data Management in EPM 11.1.2.3). We could import and validate the data, but when we try to export the data to HFM, the process keeps running for hours (it neither gives an error nor completes).
    When we check the process details in the ODI work repository, the processes show the following statuses:
    COMM_LOAD_BALANCES - Running ......... (for the past 1 day, still running)
    EBS_GL_LOAD_BALANCES_DATA - Successful
    COMM_LOAD_BALANCES - Successful
    We could load data into the staging table of the FDMEE database schema, and we are even able to drill through to the source system (EBS R12) from the Data Load Workbench, but we are not able to load the data into the HFM application.
    Log details from the process are below.
    2013-11-05 17:04:59,747 INFO  [AIF]: FDMEE Process Start, Process ID: 31
    2013-11-05 17:04:59,747 INFO  [AIF]: FDMEE Logging Level: 4
    2013-11-05 17:04:59,748 INFO  [AIF]: FDMEE Log File: C:\FDMEE\outbox\logs\OASIS_31.log
    2013-11-05 17:04:59,748 INFO  [AIF]: User:admin
    2013-11-05 17:04:59,748 INFO  [AIF]: Location:VisionLoc (Partitionkey:1)
    2013-11-05 17:04:59,749 INFO  [AIF]: Period Name:JAN (Period Key:1/1/12 12:00 AM)
    2013-11-05 17:04:59,749 INFO  [AIF]: Category Name:VisionCat (Category key:3)
    2013-11-05 17:04:59,749 INFO  [AIF]: Rule Name:VisionRule (Rule ID:2)
    2013-11-05 17:05:00,844 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2013-11-05 17:05:00,844 INFO  [AIF]: Java Platform: java1.6.0_37
    2013-11-05 17:05:02,910 INFO  [AIF]: -------START IMPORT STEP-------
    2013-11-05 17:05:02,953 INFO  [AIF]: -------END IMPORT STEP-------
    2013-11-05 17:05:03,030 INFO  [AIF]: -------START EXPORT STEP-------
    2013-11-05 17:05:03,108 INFO  [AIF]:
    Move Data for Period 'JAN'
    Any help on above is much appreciated.
    Thank you
    Regards
    Praneeth

    Hi,
    I have followed steps 1 & 2 above. Now the log shows something like the below:
    2013-11-05 09:47:31,179 INFO  [AIF]: FDMEE Process Start, Process ID: 22
    2013-11-05 09:47:31,179 INFO  [AIF]: FDMEE Logging Level: 4
    2013-11-05 09:47:31,179 INFO  [AIF]: FDMEE Log File: C:\FDMEE\outbox\logs\OASIS_22.log
    2013-11-05 09:47:31,180 INFO  [AIF]: User:admin
    2013-11-05 09:47:31,180 INFO  [AIF]: Location:VisionLoc (Partitionkey:1)
    2013-11-05 09:47:31,180 INFO  [AIF]: Period Name:JAN (Period Key:1/1/12 12:00 AM)
    2013-11-05 09:47:31,180 INFO  [AIF]: Category Name:VisionCat (Category key:3)
    2013-11-05 09:47:31,181 INFO  [AIF]: Rule Name:VisionRule (Rule ID:2)
    2013-11-05 09:47:32,378 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2013-11-05 09:47:32,378 INFO  [AIF]: Java Platform: java1.6.0_37
    2013-11-05 09:47:34,652 INFO  [AIF]: -------START IMPORT STEP-------
    2013-11-05 09:47:34,698 INFO  [AIF]: -------END IMPORT STEP-------
    2013-11-05 09:47:34,744 INFO  [AIF]: -------START EXPORT STEP-------
    2013-11-05 09:47:34,828 INFO  [AIF]:
    Move Data for Period 'JAN'
    2013-11-08 11:49:10,478 INFO  [AIF]: FDMEE Process Start, Process ID: 22
    2013-11-08 11:49:10,493 INFO  [AIF]: FDMEE Logging Level: 5
    2013-11-08 11:49:10,493 INFO  [AIF]: FDMEE Log File: C:\FDMEE\outbox\logs\OASIS_22.log
    2013-11-08 11:49:10,494 INFO  [AIF]: User:admin
    2013-11-08 11:49:10,494 INFO  [AIF]: Location:VISIONLOC (Partitionkey:1)
    2013-11-08 11:49:10,494 INFO  [AIF]: Period Name:JAN (Period Key:1/1/12 12:00 AM)
    2013-11-08 11:49:10,495 INFO  [AIF]: Category Name:VISIONCAT (Category key:1)
    2013-11-08 11:49:10,495 INFO  [AIF]: Rule Name:VISIONRULE (Rule ID:1)
    2013-11-08 11:49:11,903 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2013-11-08 11:49:11,909 INFO  [AIF]: Java Platform: java1.6.0_37
    2013-11-08 11:49:14,037 INFO  [AIF]: -------START IMPORT STEP-------
    2013-11-08 11:49:14,105 INFO  [AIF]: -------END IMPORT STEP-------
    2013-11-08 11:49:14,152 INFO  [AIF]: -------START EXPORT STEP-------
    2013-11-08 11:49:14,178 DEBUG [AIF]: CommData.exportData - START
    2013-11-08 11:49:14,183 DEBUG [AIF]: CommData.getRuleInfo - START
    2013-11-08 11:49:14,188 DEBUG [AIF]:
            SELECT brl.RULE_ID
            ,br.RULE_NAME
            ,brl.PARTITIONKEY
            ,brl.CATKEY
            ,part.PARTVALGROUP
            ,br.SOURCE_SYSTEM_ID
            ,ss.SOURCE_SYSTEM_TYPE
            ,CASE
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'EBS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'PS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FUSION%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FILE%' THEN 'N'
              ELSE 'Y'
            END SOURCE_ADAPTER_FLAG
            ,app.APPLICATION_ID
            ,app.TARGET_APPLICATION_NAME
            ,app.TARGET_APPLICATION_TYPE
            ,app.DATA_LOAD_METHOD
            ,brl.PLAN_TYPE
            ,CASE brl.PLAN_TYPE
              WHEN 'PLAN1' THEN 1
              WHEN 'PLAN2' THEN 2
              WHEN 'PLAN3' THEN 3
              WHEN 'PLAN4' THEN 4
              WHEN 'PLAN5' THEN 5
              ELSE 0
            END PLAN_NUMBER
            ,br.INCL_ZERO_BALANCE_FLAG
            ,br.PERIOD_MAPPING_TYPE
            ,br.INCLUDE_ADJ_PERIODS_FLAG
            ,br.BALANCE_TYPE ACTUAL_FLAG
            ,br.AMOUNT_TYPE
            ,br.BALANCE_SELECTION
            ,br.BALANCE_METHOD_CODE
            ,COALESCE(br.SIGNAGE_METHOD, 'ABSOLUTE') SIGNAGE_METHOD
            ,br.CURRENCY_CODE
            ,br.BAL_SEG_VALUE_OPTION_CODE
            ,brl.EXECUTION_MODE
            ,COALESCE(brl.IMPORT_FROM_SOURCE_FLAG, 'Y') IMPORT_FROM_SOURCE_FLAG
            ,COALESCE(brl.RECALCULATE_FLAG, 'N') RECALCULATE_FLAG
            ,COALESCE(brl.EXPORT_TO_TARGET_FLAG, 'N') EXPORT_TO_TARGET_FLAG
            ,CASE
              WHEN (br.LEDGER_GROUP_ID IS NOT NULL) THEN 'MULTI'
              WHEN (br.SOURCE_LEDGER_ID IS NOT NULL) THEN 'SINGLE'
              ELSE 'NONE'
            END LEDGER_GROUP_CODE
            ,COALESCE(br.BALANCE_AMOUNT_BS, 'YTD') BALANCE_AMOUNT_BS
            ,COALESCE(br.BALANCE_AMOUNT_IS, 'PERIODIC') BALANCE_AMOUNT_IS
            ,br.LEDGER_GROUP
            ,(SELECT brd.DETAIL_CODE
              FROM AIF_BAL_RULE_DETAILS brd
              WHERE brd.RULE_ID = br.RULE_ID
              AND brd.DETAIL_TYPE = 'LEDGER'       
            ) PS_LEDGER
            ,CASE lg.LEDGER_TEMPLATE
              WHEN 'COMMITMENT' THEN 'Y'
              ELSE 'N'
            END KK_FLAG
            ,p.LAST_UPDATED_BY
            ,p.AIF_WEB_SERVICE_URL WEB_SERVICE_URL
            ,p.EPM_ORACLE_INSTANCE
            FROM AIF_PROCESSES p
            INNER JOIN AIF_BAL_RULE_LOADS brl
              ON brl.LOADID = p.PROCESS_ID
            INNER JOIN AIF_BALANCE_RULES br
              ON br.RULE_ID = brl.RULE_ID
            INNER JOIN AIF_SOURCE_SYSTEMS ss
              ON ss.SOURCE_SYSTEM_ID = br.SOURCE_SYSTEM_ID
            INNER JOIN AIF_TARGET_APPLICATIONS app
              ON app.APPLICATION_ID = brl.APPLICATION_ID
            INNER JOIN TPOVPARTITION part
              ON part.PARTITIONKEY = br.PARTITIONKEY
            INNER JOIN TBHVIMPGROUP imp
              ON imp.IMPGROUPKEY = part.PARTIMPGROUP
            LEFT OUTER JOIN AIF_COA_LEDGERS l
              ON l.SOURCE_SYSTEM_ID = p.SOURCE_SYSTEM_ID
              AND l.SOURCE_LEDGER_ID = COALESCE(br.SOURCE_LEDGER_ID,imp.IMPSOURCELEDGERID)
            LEFT OUTER JOIN AIF_PS_SET_CNTRL_REC_STG scr
              ON scr.SOURCE_SYSTEM_ID = l.SOURCE_SYSTEM_ID
              AND scr.SETCNTRLVALUE = l.SOURCE_LEDGER_NAME
              AND scr.RECNAME = 'LED_GRP_TBL'
            LEFT OUTER JOIN AIF_PS_LED_GRP_TBL_STG lg
              ON lg.SOURCE_SYSTEM_ID = scr.SOURCE_SYSTEM_ID
              AND lg.SETID = scr.SETID
              AND lg.LEDGER_GROUP = br.LEDGER_GROUP
            WHERE p.PROCESS_ID = 22
    2013-11-08 11:49:14,195 DEBUG [AIF]:
          SELECT adim.BALANCE_COLUMN_NAME DIMNAME
          ,adim.DIMENSION_ID
          ,dim.TARGET_DIMENSION_CLASS_NAME
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID1
          ) COA_SEGMENT_NAME1
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID2
          ) COA_SEGMENT_NAME2
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID3
          ) COA_SEGMENT_NAME3
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID4
          ) COA_SEGMENT_NAME4
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID5
          ) COA_SEGMENT_NAME5
          ,(SELECT CASE mdd.ORPHAN_OPTION_CODE
              WHEN 'CHILD' THEN 'N'
              WHEN 'ROOT' THEN 'N'
              ELSE 'Y'
            END DIMENSION_FILTER_FLAG
            FROM AIF_MAP_DIM_DETAILS_V mdd
            ,AIF_MAPPING_RULES mr
            WHERE mr.PARTITIONKEY = tpp.PARTITIONKEY
            AND mdd.RULE_ID = mr.RULE_ID
            AND mdd.DIMENSION_ID = adim.DIMENSION_ID
          ) DIMENSION_FILTER_FLAG
          ,tiie.IMPCONCATCHAR
          FROM TPOVPARTITION tpp
          INNER JOIN AIF_TARGET_APPL_DIMENSIONS adim
            ON adim.APPLICATION_ID = 2
          INNER JOIN AIF_DIMENSIONS dim
            ON dim.DIMENSION_ID = adim.DIMENSION_ID
          LEFT OUTER JOIN TBHVIMPITEMERPI tiie
            ON tiie.IMPGROUPKEY = tpp.PARTIMPGROUP
            AND tiie.IMPFLDFIELDNAME = adim.BALANCE_COLUMN_NAME
            AND tiie.IMPMAPTYPE = 'ERP'
          WHERE tpp.PARTITIONKEY = 1
          AND adim.BALANCE_COLUMN_NAME IS NOT NULL
          ORDER BY adim.BALANCE_COLUMN_NAME
    2013-11-08 11:49:14,197 DEBUG [AIF]: {'APPLICATION_ID': 2L, 'IMPORT_FROM_SOURCE_FLAG': u'N', 'PLAN_TYPE': None, 'RULE_NAME': u'VISIONRULE', 'ACTUAL_FLAG': u'A', 'IS_INCREMENTAL_LOAD': False, 'EPM_ORACLE_INSTANCE': u'C:\\Oracle\\Middleware\\user_projects\\epmsystem1', 'CATKEY': 1L, 'BAL_SEG_VALUE_OPTION_CODE': u'ALL', 'INCLUDE_ADJ_PERIODS_FLAG': u'N', 'PERIOD_MAPPING_TYPE': u'EXPLICIT', 'SOURCE_SYSTEM_TYPE': u'EBS_R12', 'LEDGER_GROUP': None, 'TARGET_APPLICATION_NAME': u'OASIS', 'RECALCULATE_FLAG': u'N', 'SOURCE_SYSTEM_ID': 2L, 'TEMP_DATA_TABLE_NAME': 'TDATASEG_T', 'KK_FLAG': u'N', 'AMOUNT_TYPE': u'MONETARY', 'EXPORT_TO_TARGET_FLAG': u'Y', 'DATA_TABLE_NAME': 'TDATASEG', 'DIMNAME_LIST': [u'ACCOUNT', u'ENTITY', u'ICP', u'UD1', u'UD2', u'UD3', u'UD4'], 'TDATAMAPTYPE': 'ERP', 'LAST_UPDATED_BY': u'admin', 'DIMNAME_MAP': {u'UD3': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT5', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD3', 'DIMENSION_ID': 9L}, u'ICP': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'ICP', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT7', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ICP', 'DIMENSION_ID': 8L}, u'ENTITY': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Entity', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT1', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ENTITY', 'DIMENSION_ID': 12L}, u'UD2': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT4', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD2', 'DIMENSION_ID': 11L}, u'UD4': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT6', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD4', 'DIMENSION_ID': 1L}, u'ACCOUNT': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Account', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT3', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ACCOUNT', 'DIMENSION_ID': 10L}, u'UD1': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT2', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD1', 'DIMENSION_ID': 7L}}, 'TARGET_APPLICATION_TYPE': u'HFM', 'PARTITIONKEY': 1L, 'PARTVALGROUP': u'[NONE]', 'LEDGER_GROUP_CODE': u'SINGLE', 'INCLUDE_ZERO_BALANCE_FLAG': u'N', 'EXECUTION_MODE': u'SNAPSHOT', 'PLAN_NUMBER': 0L, 'PS_LEDGER': None, 'BALANCE_SELECTION': u'FUNCTIONAL', 'BALANCE_AMOUNT_IS': u'PERIODIC', 'RULE_ID': 1L, 'BALANCE_AMOUNT_BS': u'YTD', 'CURRENCY_CODE': None, 'SOURCE_ADAPTER_FLAG': u'N', 'BALANCE_METHOD_CODE': u'STANDARD', 'SIGNAGE_METHOD': u'SAME', 'WEB_SERVICE_URL': u'http://localhost:9000/aif', 'DATA_LOAD_METHOD': u'CLASSIC_VIA_EPMI'}
    2013-11-08 11:49:14,197 DEBUG [AIF]: CommData.getRuleInfo - END
    2013-11-08 11:49:14,224 DEBUG [AIF]: CommData.insertPeriods - START
    2013-11-08 11:49:14,228 DEBUG [AIF]: CommData.getLedgerListAndMap - START
    2013-11-08 11:49:14,229 DEBUG [AIF]: CommData.getLedgerSQL - START
    2013-11-08 11:49:14,229 DEBUG [AIF]: CommData.getLedgerSQL - END
    2013-11-08 11:49:14,229 DEBUG [AIF]:
              SELECT l.SOURCE_LEDGER_ID
              ,l.SOURCE_LEDGER_NAME
              ,l.SOURCE_COA_ID
              ,l.CALENDAR_ID
              ,'0' SETID
              ,l.PERIOD_TYPE
              ,NULL LEDGER_TABLE_NAME
              FROM AIF_BALANCE_RULES br
              ,AIF_COA_LEDGERS l
              WHERE br.RULE_ID = 1
              AND l.SOURCE_SYSTEM_ID = br.SOURCE_SYSTEM_ID
              AND l.SOURCE_LEDGER_ID = br.SOURCE_LEDGER_ID
    2013-11-08 11:49:14,230 DEBUG [AIF]: CommData.getLedgerListAndMap - END
    2013-11-08 11:49:14,232 DEBUG [AIF]:
            INSERT INTO AIF_PROCESS_PERIODS (
              PROCESS_ID
              ,PERIODKEY
              ,PERIOD_ID
              ,ADJUSTMENT_PERIOD_FLAG
              ,GL_PERIOD_YEAR
              ,GL_PERIOD_NUM
              ,GL_PERIOD_NAME
              ,GL_PERIOD_CODE
              ,GL_EFFECTIVE_PERIOD_NUM
              ,YEARTARGET
              ,PERIODTARGET
              ,IMP_ENTITY_TYPE
              ,IMP_ENTITY_ID
              ,IMP_ENTITY_NAME
              ,TRANS_ENTITY_TYPE
              ,TRANS_ENTITY_ID
              ,TRANS_ENTITY_NAME
              ,PRIOR_PERIOD_FLAG
              ,SOURCE_LEDGER_ID
                    SELECT DISTINCT brl.LOADID PROCESS_ID
                    ,pp.PERIODKEY PERIODKEY
                    ,prd.PERIOD_ID
                    ,COALESCE(prd.ADJUSTMENT_PERIOD_FLAG, 'N') ADJUSTMENT_PERIOD_FLAG
                    ,COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) GL_PERIOD_YEAR
                    ,COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0) GL_PERIOD_NUM
                    ,COALESCE(prd.PERIOD_NAME, ppsrc.GL_PERIOD_NAME,'0') GL_PERIOD_NAME
                    ,COALESCE(prd.PERIOD_CODE, CAST(COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0) AS VARCHAR(38)),'0') GL_PERIOD_CODE
                    ,(COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) * 10000 + COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0)) GL_EFFECTIVE_PERIOD_NUM
                    ,COALESCE(ppa.YEARTARGET, pp.YEARTARGET) YEARTARGET
                    ,COALESCE(ppa.PERIODTARGET, pp.PERIODTARGET) PERIODTARGET
                    ,'PROCESS_BAL_IMP' IMP_ENTITY_TYPE
                    ,(COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) * 10000 + COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0)) IMP_ENTITY_ID
                    ,COALESCE(prd.PERIOD_NAME, ppsrc.GL_PERIOD_NAME,'0') IMP_ENTITY_NAME
                    ,'PROCESS_BAL_TRANS' TRANS_ENTITY_TYPE
                    ,(COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) * 10000 + COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0)) TRANS_ENTITY_ID
                    ,pp.PERIODDESC TRANS_ENTITY_NAME
                    ,'N' PRIOR_PERIOD_FLAG
                    ,2202 SOURCE_LEDGER_ID
                    FROM (
                      AIF_BAL_RULE_LOADS brl
                      INNER JOIN TPOVCATEGORY pc
                        ON pc.CATKEY = brl.CATKEY
                      INNER JOIN TPOVPERIOD_FLAT_V pp
                        ON pp.PERIODFREQ = pc.CATFREQ
                        AND pp.PERIODKEY >= brl.START_PERIODKEY
                        AND pp.PERIODKEY <= brl.END_PERIODKEY
                      LEFT OUTER JOIN TPOVPERIODADAPTOR_FLAT_V ppa
                        ON ppa.PERIODKEY = pp.PERIODKEY
                        AND ppa.PERIODFREQ = pp.PERIODFREQ
                        AND ppa.INTSYSTEMKEY = 'OASIS'
                    INNER JOIN TPOVPERIODSOURCE ppsrc
                      ON ppsrc.PERIODKEY = pp.PERIODKEY
                      AND ppsrc.MAPPING_TYPE = 'EXPLICIT'
                      AND ppsrc.SOURCE_SYSTEM_ID = 2
                      AND ppsrc.CALENDAR_ID IN ('29067')
                    LEFT OUTER JOIN AIF_GL_PERIODS_STG prd
                      ON prd.PERIOD_ID = ppsrc.PERIOD_ID
                      AND prd.SOURCE_SYSTEM_ID = ppsrc.SOURCE_SYSTEM_ID
                      AND prd.CALENDAR_ID = ppsrc.CALENDAR_ID
              AND prd.SETID = '0'
              AND prd.PERIOD_TYPE = '507'
                      AND prd.ADJUSTMENT_PERIOD_FLAG = 'N'
                    WHERE brl.LOADID = 22
                    ORDER BY pp.PERIODKEY
                    ,GL_EFFECTIVE_PERIOD_NUM
    2013-11-08 11:49:14,235 DEBUG [AIF]: CommData.insertPeriods - END
    2013-11-08 11:49:14,240 DEBUG [AIF]: CommData.moveData - START
    2013-11-08 11:49:14,242 DEBUG [AIF]: CommData.getPovList - START
    2013-11-08 11:49:14,242 DEBUG [AIF]:
            SELECT PARTITIONKEY
            ,PARTNAME
            ,CATKEY
            ,CATNAME
            ,PERIODKEY
            ,COALESCE(PERIODDESC, TO_CHAR(PERIODKEY,'YYYY-MM-DD HH24:MI:SS')) PERIODDESC
            ,RULE_ID
            ,RULE_NAME
            FROM (
              SELECT DISTINCT brl.PARTITIONKEY
              ,part.PARTNAME
              ,brl.CATKEY
              ,cat.CATNAME
              ,pprd.PERIODKEY
              ,pp.PERIODDESC
              ,brl.RULE_ID
              ,br.RULE_NAME
              FROM AIF_BAL_RULE_LOADS brl
              INNER JOIN AIF_BALANCE_RULES br
                ON br.RULE_ID = brl.RULE_ID
              INNER JOIN TPOVPARTITION part
                ON part.PARTITIONKEY = brl.PARTITIONKEY
              INNER JOIN TPOVCATEGORY cat
                ON cat.CATKEY = brl.CATKEY
              INNER JOIN AIF_PROCESS_PERIODS pprd
                ON pprd.PROCESS_ID = brl.LOADID
              LEFT OUTER JOIN TPOVPERIOD pp
                ON pp.PERIODKEY = pprd.PERIODKEY
              WHERE brl.LOADID = 22
            ) q
            ORDER BY PARTITIONKEY
            ,CATKEY
            ,PERIODKEY
            ,RULE_ID
    2013-11-08 11:49:14,244 DEBUG [AIF]: CommData.getPovList - END
    2013-11-08 11:49:14,245 INFO  [AIF]:
    Move Data for Period 'JAN'
    2013-11-08 11:49:14,246 DEBUG [AIF]:
              UPDATE TDATASEG
              SET LOADID = 22
              WHERE PARTITIONKEY = 1
              AND CATKEY = 1
              AND RULE_ID = 1
              AND LOADID < 22
                AND PERIODKEY = '2012-01-01'
    2013-11-08 11:49:14,320 DEBUG [AIF]: Number of Rows updated on TDATASEG: 1842
    2013-11-08 11:49:14,320 DEBUG [AIF]:
            INSERT INTO AIF_APPL_LOAD_AUDIT (
              LOADID
              ,TARGET_APPLICATION_TYPE
              ,TARGET_APPLICATION_NAME
              ,PLAN_TYPE
              ,SOURCE_LEDGER_ID
              ,EPM_YEAR
              ,EPM_PERIOD
              ,SNAPSHOT_FLAG
              ,SEGMENT_FILTER_FLAG
              ,PARTITIONKEY
              ,CATKEY
              ,RULE_ID
              ,PERIODKEY
              ,EXPORT_TO_TARGET_FLAG
            SELECT DISTINCT 22
            ,TARGET_APPLICATION_TYPE
            ,TARGET_APPLICATION_NAME
            ,PLAN_TYPE
            ,SOURCE_LEDGER_ID
            ,EPM_YEAR
            ,EPM_PERIOD
            ,SNAPSHOT_FLAG
            ,SEGMENT_FILTER_FLAG
            ,PARTITIONKEY
            ,CATKEY
            ,RULE_ID
            ,PERIODKEY
            ,'N'
            FROM AIF_APPL_LOAD_AUDIT
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND RULE_ID = 1
            AND LOADID < 22
                AND PERIODKEY = '2012-01-01'
    2013-11-08 11:49:14,321 DEBUG [AIF]: Number of Rows inserted into AIF_APPL_LOAD_AUDIT: 1
    2013-11-08 11:49:14,322 DEBUG [AIF]:
            INSERT INTO AIF_APPL_LOAD_PRD_AUDIT (
              LOADID
              ,GL_PERIOD_ID
              ,GL_PERIOD_YEAR
              ,DELTA_RUN_ID
              ,PARTITIONKEY
              ,CATKEY
              ,RULE_ID
              ,PERIODKEY
            SELECT DISTINCT 22
            ,GL_PERIOD_ID
            ,GL_PERIOD_YEAR
            ,DELTA_RUN_ID
            ,PARTITIONKEY
            ,CATKEY
            ,RULE_ID
            ,PERIODKEY
            FROM AIF_APPL_LOAD_PRD_AUDIT
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND RULE_ID = 1
            AND LOADID < 22
                AND PERIODKEY = '2012-01-01'
    2013-11-08 11:49:14,323 DEBUG [AIF]: Number of Rows inserted into AIF_APPL_LOAD_PRD_AUDIT: 1
    2013-11-08 11:49:14,325 DEBUG [AIF]: CommData.moveData - END
    2013-11-08 11:49:14,332 DEBUG [AIF]: CommData.updateWorkflow - START
    2013-11-08 11:49:14,332 DEBUG [AIF]:
        SELECT tlp.PROCESSSTATUS
        ,tlps.PROCESSSTATUSDESC
        ,CASE WHEN (tlp.INTLOCKSTATE = 60) THEN 'Y' ELSE 'N' END LOCK_FLAG
        FROM TLOGPROCESS tlp
        ,TLOGPROCESSSTATES tlps
        WHERE tlp.PARTITIONKEY = 1
        AND tlp.CATKEY = 1
        AND tlp.PERIODKEY = '2012-01-01'
        AND tlp.RULE_ID = 1
        AND tlps.PROCESSSTATUSKEY = tlp.PROCESSSTATUS
    2013-11-08 11:49:14,336 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 20
              ,PROCESSEXP = 0
              ,PROCESSENTLOAD = 0
              ,PROCESSENTVAL = 0
              ,PROCESSEXPNOTE = NULL
              ,PROCESSENTLOADNOTE = NULL
              ,PROCESSENTVALNOTE = NULL
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,338 DEBUG [AIF]: CommData.updateWorkflow - END
    2013-11-08 11:49:14,339 DEBUG [AIF]: CommData.purgeInvalidRecordsTDATASEG - START
    2013-11-08 11:49:14,341 DEBUG [AIF]:
            DELETE FROM TDATASEG
            WHERE LOADID = 22
              AND (
            PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
            AND VALID_FLAG = 'N'
    2013-11-08 11:49:14,342 DEBUG [AIF]: Number of Rows deleted from TDATASEG: 0
    2013-11-08 11:49:14,342 DEBUG [AIF]: CommData.purgeInvalidRecordsTDATASEG - END
    2013-11-08 11:49:14,344 DEBUG [AIF]: CommData.updateAppLoadAudit - START
    2013-11-08 11:49:14,344 DEBUG [AIF]:
            UPDATE AIF_APPL_LOAD_AUDIT
            SET EXPORT_TO_TARGET_FLAG = 'Y'
            WHERE LOADID = 22
            AND PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY= '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,345 DEBUG [AIF]: Number of Rows updated on AIF_APPL_LOAD_AUDIT: 1
    2013-11-08 11:49:14,345 DEBUG [AIF]: CommData.updateAppLoadAudit - END
    2013-11-08 11:49:14,345 DEBUG [AIF]: CommData.updateWorkflow - START
    2013-11-08 11:49:14,346 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 21
              ,PROCESSEXP = 1
              ,PROCESSEXPNOTE = 'Export Successful'
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,347 DEBUG [AIF]: CommData.updateWorkflow - END
    2013-11-08 11:49:14,347 DEBUG [AIF]: CommData.exportData - END
    2013-11-08 11:49:14,404 DEBUG [AIF]: HfmData.loadData - START
    2013-11-08 11:49:14,404 DEBUG [AIF]: CommData.getRuleInfo - START
    2013-11-08 11:49:14,404 DEBUG [AIF]:
            SELECT brl.RULE_ID
            ,br.RULE_NAME
            ,brl.PARTITIONKEY
            ,brl.CATKEY
            ,part.PARTVALGROUP
            ,br.SOURCE_SYSTEM_ID
            ,ss.SOURCE_SYSTEM_TYPE
            ,CASE
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'EBS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'PS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FUSION%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FILE%' THEN 'N'
              ELSE 'Y'
            END SOURCE_ADAPTER_FLAG
            ,app.APPLICATION_ID
            ,app.TARGET_APPLICATION_NAME
            ,app.TARGET_APPLICATION_TYPE
            ,app.DATA_LOAD_METHOD
            ,brl.PLAN_TYPE
            ,CASE brl.PLAN_TYPE
              WHEN 'PLAN1' THEN 1
              WHEN 'PLAN2' THEN 2
              WHEN 'PLAN3' THEN 3
              WHEN 'PLAN4' THEN 4
              WHEN 'PLAN5' THEN 5
              ELSE 0
            END PLAN_NUMBER
            ,br.INCL_ZERO_BALANCE_FLAG
            ,br.PERIOD_MAPPING_TYPE
            ,br.INCLUDE_ADJ_PERIODS_FLAG
            ,br.BALANCE_TYPE ACTUAL_FLAG
            ,br.AMOUNT_TYPE
            ,br.BALANCE_SELECTION
            ,br.BALANCE_METHOD_CODE
            ,COALESCE(br.SIGNAGE_METHOD, 'ABSOLUTE') SIGNAGE_METHOD
            ,br.CURRENCY_CODE
            ,br.BAL_SEG_VALUE_OPTION_CODE
            ,brl.EXECUTION_MODE
            ,COALESCE(brl.IMPORT_FROM_SOURCE_FLAG, 'Y') IMPORT_FROM_SOURCE_FLAG
            ,COALESCE(brl.RECALCULATE_FLAG, 'N') RECALCULATE_FLAG
            ,COALESCE(brl.EXPORT_TO_TARGET_FLAG, 'N') EXPORT_TO_TARGET_FLAG
            ,CASE
              WHEN (br.LEDGER_GROUP_ID IS NOT NULL) THEN 'MULTI'
              WHEN (br.SOURCE_LEDGER_ID IS NOT NULL) THEN 'SINGLE'
              ELSE 'NONE'
            END LEDGER_GROUP_CODE
            ,COALESCE(br.BALANCE_AMOUNT_BS, 'YTD') BALANCE_AMOUNT_BS
            ,COALESCE(br.BALANCE_AMOUNT_IS, 'PERIODIC') BALANCE_AMOUNT_IS
            ,br.LEDGER_GROUP
            ,(SELECT brd.DETAIL_CODE
              FROM AIF_BAL_RULE_DETAILS brd
              WHERE brd.RULE_ID = br.RULE_ID
              AND brd.DETAIL_TYPE = 'LEDGER'       
            ) PS_LEDGER
            ,CASE lg.LEDGER_TEMPLATE
              WHEN 'COMMITMENT' THEN 'Y'
              ELSE 'N'
            END KK_FLAG
            ,p.LAST_UPDATED_BY
            ,p.AIF_WEB_SERVICE_URL WEB_SERVICE_URL
            ,p.EPM_ORACLE_INSTANCE
            FROM AIF_PROCESSES p
            INNER JOIN AIF_BAL_RULE_LOADS brl
              ON brl.LOADID = p.PROCESS_ID
            INNER JOIN AIF_BALANCE_RULES br
              ON br.RULE_ID = brl.RULE_ID
            INNER JOIN AIF_SOURCE_SYSTEMS ss
              ON ss.SOURCE_SYSTEM_ID = br.SOURCE_SYSTEM_ID
            INNER JOIN AIF_TARGET_APPLICATIONS app
              ON app.APPLICATION_ID = brl.APPLICATION_ID
            INNER JOIN TPOVPARTITION part
              ON part.PARTITIONKEY = br.PARTITIONKEY
            INNER JOIN TBHVIMPGROUP imp
              ON imp.IMPGROUPKEY = part.PARTIMPGROUP
            LEFT OUTER JOIN AIF_COA_LEDGERS l
              ON l.SOURCE_SYSTEM_ID = p.SOURCE_SYSTEM_ID
              AND l.SOURCE_LEDGER_ID = COALESCE(br.SOURCE_LEDGER_ID,imp.IMPSOURCELEDGERID)
            LEFT OUTER JOIN AIF_PS_SET_CNTRL_REC_STG scr
              ON scr.SOURCE_SYSTEM_ID = l.SOURCE_SYSTEM_ID
              AND scr.SETCNTRLVALUE = l.SOURCE_LEDGER_NAME
              AND scr.RECNAME = 'LED_GRP_TBL'
            LEFT OUTER JOIN AIF_PS_LED_GRP_TBL_STG lg
              ON lg.SOURCE_SYSTEM_ID = scr.SOURCE_SYSTEM_ID
              AND lg.SETID = scr.SETID
              AND lg.LEDGER_GROUP = br.LEDGER_GROUP
            WHERE p.PROCESS_ID = 22
    2013-11-08 11:49:14,406 DEBUG [AIF]:
          SELECT adim.BALANCE_COLUMN_NAME DIMNAME
          ,adim.DIMENSION_ID
          ,dim.TARGET_DIMENSION_CLASS_NAME
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID1
          ) COA_SEGMENT_NAME1
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID2
          ) COA_SEGMENT_NAME2
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID3
          ) COA_SEGMENT_NAME3
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID4
          ) COA_SEGMENT_NAME4
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID5
          ) COA_SEGMENT_NAME5
          ,(SELECT CASE mdd.ORPHAN_OPTION_CODE
              WHEN 'CHILD' THEN 'N'
              WHEN 'ROOT' THEN 'N'
              ELSE 'Y'
            END DIMENSION_FILTER_FLAG
            FROM AIF_MAP_DIM_DETAILS_V mdd
            ,AIF_MAPPING_RULES mr
            WHERE mr.PARTITIONKEY = tpp.PARTITIONKEY
            AND mdd.RULE_ID = mr.RULE_ID
            AND mdd.DIMENSION_ID = adim.DIMENSION_ID
          ) DIMENSION_FILTER_FLAG
          ,tiie.IMPCONCATCHAR
          FROM TPOVPARTITION tpp
          INNER JOIN AIF_TARGET_APPL_DIMENSIONS adim
            ON adim.APPLICATION_ID = 2
          INNER JOIN AIF_DIMENSIONS dim
            ON dim.DIMENSION_ID = adim.DIMENSION_ID
          LEFT OUTER JOIN TBHVIMPITEMERPI tiie
            ON tiie.IMPGROUPKEY = tpp.PARTIMPGROUP
            AND tiie.IMPFLDFIELDNAME = adim.BALANCE_COLUMN_NAME
            AND tiie.IMPMAPTYPE = 'ERP'
          WHERE tpp.PARTITIONKEY = 1
          AND adim.BALANCE_COLUMN_NAME IS NOT NULL
          ORDER BY adim.BALANCE_COLUMN_NAME
    2013-11-08 11:49:14,407 DEBUG [AIF]: {'APPLICATION_ID': 2L, 'IMPORT_FROM_SOURCE_FLAG': u'N', 'PLAN_TYPE': None, 'RULE_NAME': u'VISIONRULE', 'ACTUAL_FLAG': u'A', 'IS_INCREMENTAL_LOAD': False, 'EPM_ORACLE_INSTANCE': u'C:\\Oracle\\Middleware\\user_projects\\epmsystem1', 'CATKEY': 1L, 'BAL_SEG_VALUE_OPTION_CODE': u'ALL', 'INCLUDE_ADJ_PERIODS_FLAG': u'N', 'PERIOD_MAPPING_TYPE': u'EXPLICIT', 'SOURCE_SYSTEM_TYPE': u'EBS_R12', 'LEDGER_GROUP': None, 'TARGET_APPLICATION_NAME': u'OASIS', 'RECALCULATE_FLAG': u'N', 'SOURCE_SYSTEM_ID': 2L, 'TEMP_DATA_TABLE_NAME': 'TDATASEG_T', 'KK_FLAG': u'N', 'AMOUNT_TYPE': u'MONETARY', 'EXPORT_TO_TARGET_FLAG': u'Y', 'DATA_TABLE_NAME': 'TDATASEG', 'DIMNAME_LIST': [u'ACCOUNT', u'ENTITY', u'ICP', u'UD1', u'UD2', u'UD3', u'UD4'], 'TDATAMAPTYPE': 'ERP', 'LAST_UPDATED_BY': u'admin', 'DIMNAME_MAP': {u'UD3': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT5', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD3', 'DIMENSION_ID': 9L}, u'ICP': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'ICP', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT7', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ICP', 'DIMENSION_ID': 8L}, u'ENTITY': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Entity', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT1', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ENTITY', 'DIMENSION_ID': 12L}, u'UD2': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT4', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD2', 'DIMENSION_ID': 11L}, u'UD4': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT6', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD4', 'DIMENSION_ID': 1L}, u'ACCOUNT': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Account', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT3', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ACCOUNT', 'DIMENSION_ID': 10L}, u'UD1': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT2', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD1', 'DIMENSION_ID': 7L}}, 'TARGET_APPLICATION_TYPE': u'HFM', 'PARTITIONKEY': 1L, 'PARTVALGROUP': u'[NONE]', 'LEDGER_GROUP_CODE': u'SINGLE', 'INCLUDE_ZERO_BALANCE_FLAG': u'N', 'EXECUTION_MODE': u'SNAPSHOT', 'PLAN_NUMBER': 0L, 'PS_LEDGER': None, 'BALANCE_SELECTION': u'FUNCTIONAL', 'BALANCE_AMOUNT_IS': u'PERIODIC', 'RULE_ID': 1L, 'BALANCE_AMOUNT_BS': u'YTD', 'CURRENCY_CODE': None, 'SOURCE_ADAPTER_FLAG': u'N', 'BALANCE_METHOD_CODE': u'STANDARD', 'SIGNAGE_METHOD': u'SAME', 'WEB_SERVICE_URL': u'http://localhost:9000/aif', 'DATA_LOAD_METHOD': u'CLASSIC_VIA_EPMI'}
    2013-11-08 11:49:14,407 DEBUG [AIF]: CommData.getRuleInfo - END
    2013-11-08 11:49:14,407 DEBUG [AIF]: CommData.getPovList - START
    2013-11-08 11:49:14,407 DEBUG [AIF]:
            SELECT PARTITIONKEY
            ,PARTNAME
            ,CATKEY
            ,CATNAME
            ,PERIODKEY
            ,COALESCE(PERIODDESC, TO_CHAR(PERIODKEY,'YYYY-MM-DD HH24:MI:SS')) PERIODDESC
            ,RULE_ID
            ,RULE_NAME
            FROM (
              SELECT DISTINCT brl.PARTITIONKEY
              ,part.PARTNAME
              ,brl.CATKEY
              ,cat.CATNAME
              ,pprd.PERIODKEY
              ,pp.PERIODDESC
              ,brl.RULE_ID
              ,br.RULE_NAME
              FROM AIF_BAL_RULE_LOADS brl
              INNER JOIN AIF_BALANCE_RULES br
                ON br.RULE_ID = brl.RULE_ID
              INNER JOIN TPOVPARTITION part
                ON part.PARTITIONKEY = brl.PARTITIONKEY
              INNER JOIN TPOVCATEGORY cat
                ON cat.CATKEY = brl.CATKEY
              INNER JOIN AIF_PROCESS_PERIODS pprd
                ON pprd.PROCESS_ID = brl.LOADID
              LEFT OUTER JOIN TPOVPERIOD pp
                ON pp.PERIODKEY = pprd.PERIODKEY
              WHERE brl.LOADID = 22
            ) q
            ORDER BY PARTITIONKEY
            ,CATKEY
            ,PERIODKEY
            ,RULE_ID
    2013-11-08 11:49:14,409 DEBUG [AIF]: CommData.getPovList - END
    2013-11-08 11:49:14,409 DEBUG [AIF]: CommData.updateWorkflow - START
    2013-11-08 11:49:14,409 DEBUG [AIF]:
        SELECT tlp.PROCESSSTATUS
        ,tlps.PROCESSSTATUSDESC
        ,CASE WHEN (tlp.INTLOCKSTATE = 60) THEN 'Y' ELSE 'N' END LOCK_FLAG
        FROM TLOGPROCESS tlp
        ,TLOGPROCESSSTATES tlps
        WHERE tlp.PARTITIONKEY = 1
        AND tlp.CATKEY = 1
        AND tlp.PERIODKEY = '2012-01-01'
        AND tlp.RULE_ID = 1
        AND tlps.PROCESSSTATUSKEY = tlp.PROCESSSTATUS
    2013-11-08 11:49:14,410 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 30
              ,PROCESSENTLOAD = 0
              ,PROCESSENTVAL = 0
              ,PROCESSENTLOADNOTE = NULL
              ,PROCESSENTVALNOTE = NULL
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,411 DEBUG [AIF]: CommData.updateWorkflow - END
    2013-11-08 11:49:14,412 DEBUG [AIF]:
        SELECT COALESCE(usr.PROFILE_OPTION_VALUE, app.PROFILE_OPTION_VALUE, site.PROFILE_OPTION_VALUE) PROFILE_OPTION_VALUE
        FROM AIF_PROFILE_OPTIONS po
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES site
          ON site.PROFILE_OPTION_NAME = po.PROFILE_OPTION_NAME
          AND site.LEVEL_ID = 1000
          AND site.LEVEL_VALUE = 0
          AND site.LEVEL_ID <= po.MAX_LEVEL_ID
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES app
          ON app.PROFILE_OPTION_NAME = site.PROFILE_OPTION_NAME
          AND app.LEVEL_ID = 1005
          AND app.LEVEL_VALUE = NULL
          AND app.LEVEL_ID <= po.MAX_LEVEL_ID
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES usr
          ON usr.PROFILE_OPTION_NAME = usr.PROFILE_OPTION_NAME
          AND usr.LEVEL_ID = 1010
          AND usr.LEVEL_VALUE = NULL
          AND usr.LEVEL_ID <= po.MAX_LEVEL_ID
        WHERE po.PROFILE_OPTION_NAME = 'JAVA_HOME'
    2013-11-08 11:49:14,413 DEBUG [AIF]: HFM Load command:
    %EPM_ORACLE_HOME%/products/FinancialDataQuality/bin/HFM_LOAD.vbs "22" "a9E3uvSJNhkFhEQTmuUFFUElfdQgKJKHrb1EsjRsL6yZJlXsOFcVPbGWHhpOQzl9zvHoo3s%2Bdq6R4yJhp0GMNWIKMTcizp3%2F8HASkA7rVufXDWEpAKAK%2BvcFmj4zLTI3rtrKHlVEYrOLMY453J2lXk6Cy771mNSD8X114CqaWSdUKGbKTRGNpgE3BfRGlEd1wZ3cra4ee0jUbT2aTaiqSN26oVe6dyxM3zolc%2BOPkjiDNk1MqwNr43tT3JsZz4qEQGF9d39DRN3CDjUuZRPt4SEKSSL35upncRJiw2uBOtV%2FvSuGLNpZ2J79v1%2Ba1Oo9c4Xhig7SFYbE6Jwk1yXRJLTSw0DKFu%2FEpcdjpOnx%2F6YawMBNIa5iu5L637S91jT1Xd3EGmxZFq%2Bi6bHdCJAC8g%3D%3D" "C%3A%5COracle%5CMiddleware%5Cuser_projects%5Cepmsystem1" "%25EPM_ORACLE_HOME%25%2F..%2Fjdk160_35"
    Is there anywhere we need to set the EPM Home? Also, could you let me know what PSU1 is?
    Thanks in advance.
    Praneeth

  • Synchronizing Updates on a Staging Table

    Please help me with resolving the following issue:
    A load script runs to move records from a data file to a staging table.
    After this script completes, there is code to update two fields of the staging table.
    To do this, the shell script runs a script (generate_ranges.sql). It takes a parameter of 5000000 and creates ranges based on this passed-in number, up to the total number of rows in the staging table. So say the staging table has 65,000,000 rows.
    This script will create a file that looks like the following (when 5000000 is passed in):
    1 | 5000000
    5000001 | 10000000
    10000001 | 15000000
    15000001 | 20000000
    20000001 | 25000000
    25000001 | 30000000
    30000001 | 35000000
    35000001 | 40000000
    40000001 | 45000000
    45000001 | 50000000
    50000001 | 55000000
    55000001 | 60000000
    60000001 | 65000000
    The script goes on to read the data file for each row, and it calls a shell script and passes in each range. So in this case there are 13 ranges. What happens is that 13 separate updates on the staging table run in the background.
    The first updates rows 1 - 5000000, the second rows 5000001 - 10000000, etc.
    So there are 13 updates happening behind the scenes.
    The problem is that the script has no way of knowing that all updates have completed successfully before proceeding automatically. Right now I manually check that all updates completed and then restart the script at the next step. However, we want the code to verify automatically that all the updates are done and then move on in the script. So we need a way to count the number of candidate updates (right now 13, but it could be 14 or more in the future) and know when all "x" updates have completed. It may be that the update for rows 1 - 5000000 takes 30 minutes and the next update (5000001 - 10000000) takes 35 minutes; all updates run in parallel, and only after the 13 parallel updates are complete can the script proceed with the subsequent steps.
    So please help me out with fixing this problem programmatically.
    Thanks for your cooperation in advance.
    Regards,
    Ayan.

    Ayan,
    Are you really sure you want to update 65 million rows?
    Alternative: create table as select <columns, with the 2 columns updated> from the staging table.
    With this approach, you probably don't need to split the update.
    Regards,
    Rob.
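
    A minimal sketch of Rob's CTAS alternative, with hypothetical column names (the two derived columns stand in for the two fields being updated):

    -- Rebuild the staging table with the two columns computed inline,
    -- instead of updating 65 million rows in place.
    CREATE TABLE staging_new NOLOGGING PARALLEL 8 AS
    SELECT s.id,
           s.col_a,
           UPPER(s.raw_code)    AS derived_col1,
           s.amount * s.fx_rate AS derived_col2
    FROM   staging s;
    -- Then swap the tables:
    -- RENAME staging TO staging_old; RENAME staging_new TO staging;

    If the chunked in-place update has to stay, DBMS_PARALLEL_EXECUTE (available from 11g) addresses the original synchronization problem directly: RUN_TASK only returns once every chunk has been processed, so the driving script knows when all updates are finished. A sketch with hypothetical names:

    BEGIN
      DBMS_PARALLEL_EXECUTE.CREATE_TASK('stg_upd');
      -- Chunk the table by a numeric key, 5,000,000 ids per chunk.
      DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL(
          task_name    => 'stg_upd',
          table_owner  => USER,
          table_name   => 'STAGING',
          table_column => 'ID',
          chunk_size   => 5000000);
      -- Runs the update once per chunk, 13 jobs at a time.
      DBMS_PARALLEL_EXECUTE.RUN_TASK(
          task_name      => 'stg_upd',
          sql_stmt       => 'UPDATE staging SET derived_col1 = UPPER(raw_code)
                             WHERE id BETWEEN :start_id AND :end_id',
          language_flag  => DBMS_SQL.NATIVE,
          parallel_level => 13);
      -- Control returns here only after all chunks have run.
      IF DBMS_PARALLEL_EXECUTE.TASK_STATUS('stg_upd')
           = DBMS_PARALLEL_EXECUTE.FINISHED THEN
        DBMS_PARALLEL_EXECUTE.DROP_TASK('stg_upd');
      END IF;
    END;
    /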

  • How to move data from a staging table to three entity tables #2

    Environment: SQL Server 2008 R2
    I have a few questions:
    How would I prevent duplicate records if the SSIS package is executed many times?
    How would I know that the whole huge volume of data has been loaded into the entity tables?
    In reference to "how to move data from a staging table to three entity tables", since I am loading a large volume of data while using a lookup transformation:
    which of the merge components is best suited?
    How do I configure the merge component correctly? (screenshot preferred)
    Please refer to the following link
    http://social.msdn.microsoft.com/Forums/en-US/5f2128c8-3ddd-4455-9076-05fa1902a62a/how-to-move-data-from-a-staging-table-to-three-entity-tables?forum=sqlintegrationservices

    You can use the RowCount transformation in the path where you want to capture record details. Then, inside the RowCount transformation, pass an integer variable to receive the count value.
    Inside the event handler, add an Execute SQL Task with an INSERT statement to write the row count to your audit table.
    Can you also show me how to check against the destination table using key columns inside a Lookup task and insert only non-matched records (No Match output)?
    This is explained clearly in the link below, which Arthur posted:
    http://www.sqlis.com/sqlis/post/Get-all-from-Table-A-that-isnt-in-Table-B.aspx
    For large data volumes I would prefer doing this in T-SQL. So what you could do is dump the data to a staging table and then apply a T-SQL MERGE between the tables (or even a combination of INSERT/UPDATE statements).
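    A minimal T-SQL sketch of that MERGE, with hypothetical table and column names; because MERGE updates matched rows and inserts only unmatched ones, re-running the package does not create duplicates:

    MERGE dbo.EntityTable AS tgt
    USING dbo.StagingTable AS src
        ON tgt.BusinessKey = src.BusinessKey       -- key column(s)
    WHEN MATCHED THEN
        UPDATE SET tgt.Attribute1 = src.Attribute1,
                   tgt.Attribute2 = src.Attribute2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (BusinessKey, Attribute1, Attribute2)
        VALUES (src.BusinessKey, src.Attribute1, src.Attribute2);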
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Sliding window scenario in PTF vs availability of recently loaded data in the staging table for reporting purposes

    Hello everybody, I am a SQL Server DBA and I am planning to implement table partitioning on some of our large tables in our data warehouse. I am thinking of designing it using the sliding window scenario. I do have one concern though: I think the staging tables we use for loading new data and for switching out the old partition are going to be non-partitioned, right? Well, I don't have an issue with the second staging table, the one used for switching out the old partition. My concern is with the first staging table, the one we use for switch-in purposes. Since this table is non-partitioned and holds the new data, how are we going to access this data for reporting purposes before we switch it into our target partitioned table? Say this staging table holds one month's worth of data and we switch it in at the end of the month. Correct me if I am wrong: one way I can think of to access this non-partitioned staging table is by creating views, but we don't want to change our code.
    Could you guys share your thoughts and experiences?
    We really appreciate your help.

    Hi BG516,
    According to your description, you need to implement table partitioning on some of your large tables in your data warehouse, and the requirement is that the partitioned table hold only one month of data; please correct me if I have misunderstood anything.
    In this case, you can create a non-partitioned table and import the records that are more than one month old into the newly created table, leaving the records less than one month old in the table in your data warehouse. Then create a job to copy the data from the partitioned table into the non-partitioned table on the last day of each month. That way the partitioned table only contains the data for the current month. Please refer to the links below for details.
    http://blog.sqlauthority.com/2007/08/15/sql-server-insert-data-from-one-table-to-another-table-insert-into-select-select-into-table/
    https://msdn.microsoft.com/en-us/library/ms190268.aspx?f=255&MSPPError=-2147217396
    If this is not what you want, please provide us more information, so that we can make further analysis.
    Regards,
    Charlie Liao
    TechNet Community Support
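
    For reference, the switch-in step of the sliding-window pattern discussed here is a metadata-only operation. A minimal T-SQL sketch with hypothetical names (the staging table must match the target's schema and carry a CHECK constraint proving its rows fit the target partition's range):

    -- Constraint proves every staged row belongs to the new month's range.
    ALTER TABLE dbo.Stage_NewMonth
      ADD CONSTRAINT ck_stage_month
      CHECK (LoadDate >= '20130101' AND LoadDate < '20130201');

    -- Instant switch-in: no data movement, so reports see the month appear atomically.
    ALTER TABLE dbo.Stage_NewMonth
      SWITCH TO dbo.FactSales PARTITION 13;  -- 13 = the empty target partition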

  • Creation of staging table - quickest way.

    Hi,
    I need to create a staging table with roughly 370 fields. The source's specs are not clear, and neither are some of the staging table's fields. So effectively, first of all I need to understand the staging fields and the source fields. The end user has been given a target date based on some generic guesswork (not by me).
    I would like to know the best approach to complete the activity quicker - at least, to document all the staging fields and source fields. What I have now is one Excel file with four columns - Staging Field, Source Table and Field, Staging Field Type, and Staging Field Length. Of the total 370 staging fields, I have just completed 50%, and the target date is approaching fast - I have not touched developing the Oracle package yet.
    This could be a generic question, but in order to meet the deadline, could you please shed some light on the quickest way/tips to complete the mapping documentation (or to create the staging table)?
    Thanks in advance,
    Manoj.

    MDixit wrote:
    Hi,
    I need to create a staging table with roughly 370 fields. The source's specs are not clear, and neither are some of the staging table's fields. So effectively, first of all I need to understand the staging fields and the source fields. The end user has been given a target date based on some generic guesswork (not by me).
    I would like to know the best approach to complete the activity quicker - at least, to document all the staging fields and source fields. What I have now is one Excel file with four columns - Staging Field, Source Table and Field, Staging Field Type, and Staging Field Length. Of the total 370 staging fields, I have just completed 50%, and the target date is approaching fast - I have not touched developing the Oracle package yet.
    This could be a generic question, but in order to meet the deadline, could you please shed some light on the quickest way/tips to complete the mapping documentation (or to create the staging table)?
    Thanks in advance,
    Manoj.
    The first thing that comes to mind is the unclear specifications. (Actually, the first thing that came to my mind was that tables have columns, not fields, but anyway...)
    You need to clarify what is coming from the source system before you will be able to intelligently map it to your target system. You may have to push back on your client and tell them that the deadline can't be met unless they provide more information about their source system.
    How have they chosen those 370 columns? How did they know these needed to be part of whatever process you are completing? If they know that these columns need to be moved to a target system, then they should be able to tell you why.

  • How can I INSERT INTO from Staging Table to Production Table

    I’ve got a Bulk Load process which works fine, but I’m having major problems downstream.
    Almost everything is Varchar(100), and this works fine. 
    Except for these fields:
    INDEX SHARES, INDEX MARKET CAP, INDEX WEIGHT, DAILY PRICE RETURN, and DAILY TOTAL RETURN
    These five fields must be some kind of numeric, because I need to perform sums on them.
    Here’s my SQL:
    CREATE TABLE [dbo].[S&P_Global_BMI_(US_Dollar)] (
        [CHANGE]             VARCHAR(100),
        [EFFECTIVE DATE]     VARCHAR(100),
        [COMPANY]            VARCHAR(100),
        [RIC]                VARCHAR(100),
        Etc.
        [INDEX SHARES]       NUMERIC(18, 12),
        [INDEX MARKET CAP]   NUMERIC(18, 12),
        [INDEX WEIGHT]       NUMERIC(18, 12),
        [DAILY PRICE RETURN] NUMERIC(18, 12),
        [DAILY TOTAL RETURN] NUMERIC(18, 12),
    From the main staging table, I’m writing data to 4 production tables.
    CREATE TABLE [dbo].[S&P_Global_Ex-U.S._LargeMidCap_(US_Dollar)] (
        [CHANGE]             VARCHAR(100),
        [EFFECTIVE DATE]     VARCHAR(100),
        [COMPANY]            VARCHAR(100),
        [RIC]                VARCHAR(100),
        Etc.
        [INDEX SHARES]       FLOAT(20),
        [INDEX MARKET CAP]   FLOAT(20),
        [INDEX WEIGHT]       FLOAT(20),
        [DAILY PRICE RETURN] FLOAT(20),
        [DAILY TOTAL RETURN] FLOAT(20),
    INSERT INTO [dbo].[S&P_Global_Ex-U.S._LargeMidCap_(US_Dollar)]
    SELECT [CHANGE],
           Etc.
           [DAILY TOTAL RETURN]
    FROM [dbo].[S&P_Global_BMI_(US_Dollar)]
    WHERE ISNUMERIC([EFFECTIVE DATE]) = 1
      AND [CHANGE] IS NULL
      AND [COUNTRY] <> 'US'
      AND ([SIZE] = 'L' OR [SIZE] = 'M')
    The Bulk Load is throwing errors like this (unless I make everything Varchar):
    Bulk load data conversion error (truncation) for row 7, column 23 (INDEX SHARES).
    Msg 4863, Level 16, State 1, Line 1
    When I try to load data from the staging table to the production table, I get this:
    Msg 8115, Level 16, State 8, Line 1
    Arithmetic overflow error converting varchar to data type numeric.
    The statement has been terminated.
    There must be an easy way to overcome this, right?
    Please advise!
    Thanks!!
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    Nothing is returned. Everything is VARCHAR(100). The problem is this:
    If I use FLOAT(18) or REAL, I get exponential numbers, which are useless to me.
    If I use DECIMAL(18,12) or NUMERIC(18,12), I get errors.
    Msg 4863, Level 16, State 1, Line 41
    Bulk load data conversion error (truncation) for row 7, column 23 (INDEX SHARES).
    Msg 4863, Level 16, State 1, Line 41
    Bulk load data conversion error (truncation) for row 8, column 23 (INDEX SHARES).
    Msg 4863, Level 16, State 1, Line 41
    Bulk load data conversion error (truncation) for row 9, column 23 (INDEX SHARES).
    There must be some data type that fits this!
    Here's a sample of what I'm dealing with.
    -0.900900901
    9.302325581
    -2.648171501
    -1.402805723
    -2.911830584
    -2.220960866
    2.897762349
    -0.219640074
    -5.458448607
    -0.076626094
    6.710940231
    0.287200186
    0.131682908
    0.124276221
    0.790818723
    0.420505119
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
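
    For what it's worth, DECIMAL(18,12) leaves only 6 digits before the decimal point, so the sample returns above fit, but any INDEX SHARES or INDEX MARKET CAP value of a million or more truncates, which matches the errors shown. One option (a sketch, not tested against this feed) is to widen the precision in both table definitions, e.g. [INDEX SHARES] DECIMAL(28, 12), and convert explicitly when reading the VARCHAR staging columns:

    -- 12 decimal places still fit, and 16 integer digits now do too.
    SELECT CAST([INDEX SHARES] AS DECIMAL(28, 12)) AS [INDEX SHARES]
    FROM   [dbo].[S&P_Global_BMI_(US_Dollar)]
    WHERE  ISNUMERIC([INDEX SHARES]) = 1;   -- skip rows that are not numeric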

  • How to skip a row being inserted into a staging table

    Hi Everyone,
    Actually, I am transforming data from a source table to a staging table, and then from staging to the final table. I have generated a primary key using a sequence. I set the insert method of the staging table to truncate/insert, so every time the mapping is loaded the staging table is truncated and new data is inserted. But since I am using a sequence on the staging table, it assigns new numbers to old data from the source table, and that data gets duplicated in the final target table. For this reason I am using a key lookup on some of the input attributes, and then, using an expression, I am trying to avoid the duplication. In each output attribute of the expression I am putting the case statement:
    CASE WHEN INGRP1.ROW_ID IS NULL
    THEN INGRP1.ID
    END
    Due to this condition I am getting the error:
    Warning
    ORA-01400: cannot insert NULL into ("SCOTT"."STG_TARGET_TABLE"."ROW_ID")
    But I am stuck: when the value of ROW_ID is null, what condition or statement should I write to skip the insertion of the data? I want to insert data only when ROW_ID IS NULL.
    Kindly help me out.
    Thanks
    Regards
    Suhail Dayer

    You don't need the tables to be identical to use MINUS; only the "Select List" must match. Assuming you have the same Business Key (one or more columns that uniquely identify a row of your source data) in both the source and final table, you can do the following:
    - Use a Set Operation where the result is the Business Key of the Staging table MINUS the Business Key of the final table
    - The output of the Set Operation is then joined to the Staging table to get the rest of the attributes for these rows
    - The output of the Join is inserted into the final table
    This will make sure only rows with new Business Keys are loaded.
    Hope this helps,
    Roald
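
    A sketch of that approach in plain SQL, assuming a single-column business key and hypothetical names:

    INSERT INTO final_table (business_key, attr1, attr2)
    SELECT s.business_key, s.attr1, s.attr2
    FROM   stg_table s
    JOIN  (SELECT business_key FROM stg_table
           MINUS
           SELECT business_key FROM final_table) new_keys
      ON  new_keys.business_key = s.business_key;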

  • How to create staging tables with synonyms

    Hi All,
    I have a package which runs and updates a table weekly.
    My question is: a web service hits this table (TABLE CC) all the time. When my script runs, it takes at least 20 minutes. Hence there would be a gap during which the web service cannot find anything in the table, because the previous data has been deleted and the new data is not yet loaded.
    I have heard of staging tables and synonyms, and I hope someone can share how I should go about it:
    1) Rename CC to CC_1;
    2) Create synonym CC for CC_1;
    3) Run the scripts to load the data into a new table called CC_2;
    4) Drop the synonym CC;
    5) Create synonym CC for CC_2;
    6) Check SELECT * FROM ALL_SYNONYMS to see which table the synonym points to.
    What about my scripts that update the table CC? Do I need to change them to CC_1 or CC_2 all the time? Or can it be done dynamically? I'm stuck and don't know where to start.
    Thanks!
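
    For what it's worth, steps 4 and 5 above can be collapsed into one statement, so there is no window in which the synonym does not exist; a sketch with a hypothetical source:

    -- While the web service keeps reading CC (currently pointing at CC_1),
    -- reload the offline copy.
    TRUNCATE TABLE cc_2;
    INSERT INTO cc_2 SELECT * FROM source_data;

    -- Repoint in a single statement: no DROP/CREATE gap.
    CREATE OR REPLACE SYNONYM cc FOR cc_2;

    -- The load script can find out which copy is currently live by querying
    -- USER_SYNONYMS.TABLE_NAME instead of hard-coding CC_1 or CC_2.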

    Why exactly do you need to do all this and what type(s) of DML are done on the table? In other words, WHY do all the rows need to be deleted and why does the table need to be loaded with completely new rows?
    "What-cha driving at dude?"
    Is it possible to just insert, delete or update rows in place based on whatever new information you've loaded into a staging table? Because if you have, say, 100,000 rows, and you need to delete 20,000, update 5,000 and insert 15,000, you could just do it all live even if it takes 20 minutes; and I think that could likely be improved upon with bulk collection and bulk binding in your PL/SQL.
    OTOH, if your table contains last week's data and you need to load this week's data, I'd suggest partitioning by range (1 week at a time). Then you can merely load the new week's data and afterwards drop the prior week's partition. If you do all the inserts on a single commit, no user will ever notice the difference until they see this week's data instead of last week's.
    HTH

  • Staging Table Challenge in PL SQL Proc

    Hi Guys,
    Need your ideas for my requirement below.
    I have two DBs, and daily we sync data from DB1 to DB2 (similar structures) using some plain SQL queries, as below.
    Data is synced to only a few tables, and only to a few columns under certain conditions (we are using nearly 25 temp tables to copy the data), but we have been unable to track which data gets updated daily.
    We badly need to track the data that gets updated daily, so please suggest how I could do this.
    Staging tables? Note: we can't maintain staging tables for all the temp tables, and every now and then we change the DB table structures as well.
    Or is there any other way to achieve this?
    At the end we should have the data that got updated each day, and reports on that data.
    Please help me.
    Cheers,
    Naresh

    Naresh wrote:
    Hi Guys,
    Need your ideas for my requirement below.
    I have two DBs, and daily we sync data from DB1 to DB2 (similar structures) using some plain SQL queries, as below.
    Data is synced to only a few tables, and only to a few columns under certain conditions (we are using nearly 25 temp tables to copy the data), but we have been unable to track which data gets updated daily.
    We badly need to track the data that gets updated daily, so please suggest how I could do this.
    Staging tables? Note: we can't maintain staging tables for all the temp tables, and every now and then we change the DB table structures as well.
    Or is there any other way to achieve this?
    At the end we should have the data that got updated each day, and reports on that data.
    Please help me.
    Cheers,
    Naresh
    Change Data Capture
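
    If full Change Data Capture is more than you need, the same idea can be hand-rolled with a generic audit trigger; a sketch with hypothetical names (one trigger per tracked table):

    CREATE TABLE sync_audit (
      table_name VARCHAR2(30),
      row_pk     VARCHAR2(100),
      dml_type   VARCHAR2(1),          -- I / U / D
      changed_on DATE DEFAULT SYSDATE
    );

    CREATE OR REPLACE TRIGGER trg_t1_audit
    AFTER INSERT OR UPDATE OR DELETE ON t1
    FOR EACH ROW
    BEGIN
      INSERT INTO sync_audit (table_name, row_pk, dml_type)
      VALUES ('T1',
              TO_CHAR(NVL(:NEW.id, :OLD.id)),   -- assumes a numeric PK "id"
              CASE WHEN INSERTING THEN 'I'
                   WHEN UPDATING  THEN 'U'
                   ELSE 'D' END);
    END;
    /

    Daily reports then just filter SYNC_AUDIT on TRUNC(changed_on).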

  • Usage of a larger number of staging tables

    Hi,
    We use many staging tables for our processes in a DWH project, which increases the total number
    of tables in our schema. Will it put more stress on Oracle catalog/metadata management? Please
    suggest: what is the optimal number of tables to be maintained in a schema? Thank you.
    Regards,
    DR

    what is the optimal number of tables to be maintained in a schema?
    As few as possible, but no more than necessary.

  • Staging table for PDH

    Hi,
    Could anyone please tell me what the staging tables for PDH are?
    Actually, I am using ODI to move data from eBS to PDH.
    So where should I put the data?
    How can this be achieved?
    Thanks

    Some of the staging tables are:
    1) Items -- mtl_system_items_interface
    2) Item Categories -- mtl_item_categories_interface
    3) Cross Reference -- MTL_CROSS_REFERENCES_INTERFACE
    4) Item Revision Interface -- MTL_ITEM_REVISIONS_INTERFACE
    5) Item Relation Interface -- MTL_RELATED_ITEMS_INTERFACE
    6) Item Catalog Group Interface -- MTL_ITEM_CAT_GRPS_INTERFACE
    7) Attribute Group Interface -- EGO_ATTR_GROUPS_INTERFACE
    8) Item UDA Interface -- EGO_ITM_USR_ATTR_INTRFC
    9) EGO_ITEM_ASSOCIATIONS_INTF
    10) EGO_ITEM_PEOPLE_INTF
    There are some more; you can get them all by querying the database.
    Hope it helps.
    Thanks,
    Priyanshu
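
    Since these interface/staging tables follow the %_INTERFACE / %_INTF naming convention, "querying the database" for the rest can look like the sketch below (the schema list is an assumption; adjust for your install):

    SELECT owner, table_name
    FROM   all_tables
    WHERE  (table_name LIKE '%INTERFACE%' OR table_name LIKE '%INTF%')
    AND    owner IN ('INV', 'EGO')  -- item / PDH related schemas
    ORDER  BY owner, table_name;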

  • Staging Table

    hi,
    I was asked a strange question by an interviewer (at least it was strange to me):
    There are 3 tables, say A, B & C. Every table has a column userid. I want to select all data for the userids of A and B into a staging table, excluding the userids of C.
    What does it mean? What is a staging table?
    Please help me out... or at least give me some links for reference.
    Thanks & regards,
    akp

    Hi,
    I want to select all data for the userids of A and B into a staging table, excluding the userids of C.
    If A & B have the same table definition, here is the SQL:
    CREATE TABLE aggregate_abc AS
    SELECT * FROM a
    WHERE a.userid NOT IN (SELECT c.userid FROM c)
    UNION
    SELECT * FROM b
    WHERE b.userid NOT IN (SELECT c.userid FROM c);
    If A & B have different table definitions:
    CREATE TABLE aggregate_abc AS
    SELECT a.*, b.*
    FROM a, b
    WHERE a.userid NOT IN (SELECT c.userid FROM c)
    AND b.userid NOT IN (SELECT c.userid FROM c);
    This is just a start, a first-draft approach; simulate the SQL prior to running the DDL (CTAS). This approach depends on the row counts of a, b, c... just a conceptual first draft of the logic & SQL.

  • Source table = Staging table = Cube in a single mapping?

    I want to extract data from some source tables and load a staging table. Then, using the staging table as a source, I want to load a cube.
    I have tried doing all of that in a single mapping, with the staging table operator in the middle of the mapping.
    Apparently, it does not work. Only the second part of the mapping is generated, that is, the MERGE statement that loads the cube using the staging table as the source.
    Of course, I can build two mappings and execute one after the other.
    My question is: is the first approach feasible? Can I somehow load a staging table, and then use it as the source to load a cube, all in a single mapping?
    Best regards
    Juan Algaba

    Hi,
    Doing all of this in one mapping is very bad design - but possible. What does the Runtime Audit Browser say? How do you know that only the second part of the mapping was executed?
    Regards,
    Detlef
