Use of a large number of staging tables

Hi,
We use a large number of staging tables for our processes in our DWH project, which increases the total
number of tables in our schema. Will this put extra stress on the Oracle catalog/metadata management
(the data dictionary)? Please suggest: what is the optimal number of tables to maintain in a schema? Thank you.
Regards,
DR

what is the optimal number of tables to be maintained in a schema?
As few as possible, but no more than necessary.
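As a rough sanity check (an illustrative query, not from the original thread), you can see how many tables each schema contributes to the data dictionary. The dictionary comfortably handles tens of thousands of objects; dictionary stress usually comes from DDL churn and unshared SQL rather than the raw table count:

-- Count tables per schema (requires SELECT access to DBA_TABLES).
SELECT owner, COUNT(*) AS table_count
FROM   dba_tables
GROUP  BY owner
ORDER  BY table_count DESC;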

Similar Messages

  • What is a staging table in BW?

    Hi Experts,
    Can you tell me what a staging table is in BW? What does "staging" mean? What is the definition? Where can I get more information about staging tables?
    Thanks very much for your input.

    Hi,
    A staging table is nothing but the PSA table. Here the data is stored in the same format as the source system (raw data). These are transparent, two-dimensional tables. "Staging" means that before processing the data, we bring it over and store it in the source-system format. This way we avoid disturbing the OLTP system, because daily transactions are taking place in the OLTP system. In BI 7.0 the PSA is mandatory; only if the data volume is very small can we do without it.
    Tarak

  • Synchronizing Updates on a Staging Table

    Please help me out with resolving the following issue:
    A load script is running to move records from a data file to a staging table.
    After this script completes, there is code to update two fields of the staging table.
    To do this, the shell script runs a script (generate_ranges.sql). It takes a parameter of 5000000 and creates ranges based on this passed-in number, up to the total number of rows in the staging table. So say the staging table has 65,000,000 rows.
    This script will create a file that looks like the following (when 5000000 is passed in):
    1 | 5000000
    5000001 | 10000000
    10000001 | 15000000
    15000001 | 20000000
    20000001 | 25000000
    25000001 | 30000000
    30000001 | 35000000
    35000001 | 40000000
    40000001 | 45000000
    45000001 | 50000000
    50000001 | 55000000
    55000001 | 60000000
    60000001 | 65000000
    The script goes on to read this file row by row, calling a shell script and passing in each range. So in this case there are 13 ranges, and therefore 13 separate updates on the staging table happening in the background.
    The first updates rows 1 - 5000000, the second rows 5000001 - 10000000, etc.
    So there are 13 updates happening behind the scenes.
    The problem is that there is no way for the script to know that all updates have completed successfully before proceeding automatically. Right now I manually check that all updates completed and then restart the script at the next step. However, we want the code to verify automatically that all the updates are done and then move on in the script. So we need a way to count the number of candidate updates (right now 13, but it could be 14 or more in the future) and to know that all "x" updates completed. The update for rows 1 - 5000000 may take 30 minutes while the next one (5000001 - 10000000) takes 35 minutes; all updates run in parallel, and only after all 13 parallel updates are complete can the script proceed with the subsequent steps.
    So please help me out with fixing this problem programmatically.
    Thanks for your cooperation in advance.
    Regards,
    Ayan.

    Ayan,
    Are you really sure you want to update 65 million rows?
    Alternative: create table as select <columns with 2 columns updated> from staging table;
    With this approach, you probably don't need to split the update.
    Regards,
    Rob.
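
    If you do keep the chunked-update approach, note that Oracle 11g and later ships DBMS_PARALLEL_EXECUTE, which does exactly this orchestration: it builds the number ranges, runs one update per chunk in parallel, and RUN_TASK does not return until every chunk has finished. A minimal sketch, assuming a numeric ID column; STAGING_TABLE, FLAG1 and FLAG2 are placeholder names, not from the thread:
    BEGIN
      DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'stg_upd');
      -- Build ranges of ~5,000,000 ids, like the generate_ranges.sql output.
      DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL(
        task_name    => 'stg_upd',
        table_owner  => USER,
        table_name   => 'STAGING_TABLE',
        table_column => 'ID',
        chunk_size   => 5000000);
      -- Runs one UPDATE per chunk, up to 13 at a time, and blocks until all are done.
      DBMS_PARALLEL_EXECUTE.RUN_TASK(
        task_name      => 'stg_upd',
        sql_stmt       => q'[UPDATE staging_table
                             SET flag1 = 'Y', flag2 = 'Y'
                             WHERE id BETWEEN :start_id AND :end_id]',
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 13);
      IF DBMS_PARALLEL_EXECUTE.TASK_STATUS('stg_upd') = DBMS_PARALLEL_EXECUTE.FINISHED THEN
        NULL;  -- all chunks succeeded; safe to proceed to the next step
      END IF;
      DBMS_PARALLEL_EXECUTE.DROP_TASK('stg_upd');
    END;
    /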

  • How to Compare Data length of staging table with base table definition

    Hi,
    I have two tables: a staging table and a base table.
    I'm getting data from flat files into the staging table. As per the requirement, the structures of the staging table and the base table differ (the length of every column in the staging table is 25% greater, to load the data without errors). For example, if we have a CITY column that is VARCHAR2(40) in the staging table, it is VARCHAR2(25) in the base table. Once data is loaded into the staging table, I want to compare the actual data length of every column in the staging table with the base table definition (DATA_LENGTH for every column from ALL_TAB_COLUMNS), and if any column's length exceeds it, I need to update the corresponding row in the staging table, which also has a flag column called ERR_LENGTH.
    So for this I'm using:
    cursor c1 is select length(a.id), length(a.name)... from staging_table;
    cursor c2(name varchar2) is select data_length from all_tab_columns where table_name = 'BASE_TABLE' and column_name = name;
    But the first query returns all the data at once, whereas with the second cursor I need to fetch each column separately and then compare it against the first.
    Can anyone tell me how to get the desired results?
    Thanks,
    Mahender.

    This is a shot in the dark, but take a look at the example below:
    SQL> DROP TABLE STAGING;
    Table dropped.
    SQL> DROP TABLE BASE;
    Table dropped.
    SQL> CREATE TABLE STAGING
      2  (
      3          ID              NUMBER
      4  ,       A               VARCHAR2(40)
      5  ,       B               VARCHAR2(40)
      6  ,       ERR_LENGTH      VARCHAR2(1)
      7  );
    Table created.
    SQL> CREATE TABLE BASE
      2  (
      3          ID      NUMBER
      4  ,       A       VARCHAR2(25)
      5  ,       B       VARCHAR2(25)
      6  );
    Table created.
    SQL> INSERT INTO STAGING VALUES (1,RPAD('X',26,'X'),RPAD('X',25,'X'),NULL);
    1 row created.
    SQL> INSERT INTO STAGING VALUES (2,RPAD('X',25,'X'),RPAD('X',26,'X'),NULL);
    1 row created.
    SQL> INSERT INTO STAGING VALUES (3,RPAD('X',25,'X'),RPAD('X',25,'X'),NULL);
    1 row created.
    SQL> COMMIT;
    Commit complete.
    SQL> SELECT * FROM STAGING;
            ID A                                        B                                        E
             1 XXXXXXXXXXXXXXXXXXXXXXXXXX               XXXXXXXXXXXXXXXXXXXXXXXXX
             2 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXXX
             3 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXX
    SQL> UPDATE  STAGING ST
      2  SET     ERR_LENGTH = 'Y'
      3  WHERE   EXISTS
      4          (
      5                  WITH    columns_in_staging AS
      6                  (
      7                          /* Retrieve all the column names for the staging table with the exception of the primary key column
      8                           * and order them alphabetically.
      9                           */
    10                          SELECT  COLUMN_NAME
    11                          ,       ROW_NUMBER() OVER (ORDER BY COLUMN_NAME) RN
    12                          FROM    ALL_TAB_COLUMNS
    13                          WHERE   TABLE_NAME='STAGING'
    14                          AND     COLUMN_NAME != 'ID'
    15                          ORDER BY 1
    16                  ),      staging_unpivot AS
    17                  (
    18                          /* Using the columns_in_staging above UNPIVOT the result set so you get a record for each COLUMN value
    19                           * for each record. The DECODE performs the unpivot and it works if the decode specifies the columns
    20                           * in the same order as the ROW_NUMBER() function in columns_in_staging
    21                           */
    22                          SELECT  ID
    23                          ,       COLUMN_NAME
    24                          ,       DECODE
    25                                  (
    26                                          RN
    27                                  ,       1,A
    28                                  ,       2,B
    29                                  )  AS VAL
    30                          FROM            STAGING
    31                          CROSS JOIN      COLUMNS_IN_STAGING
    32                  )
    33                  /*      Only return IDs for records that have at least one column value that exceeds the length. */
    34                  SELECT  ID
    35                  FROM
    36                  (
    37                          /* Join the unpivoted staging table to the ALL_TAB_COLUMNS table on the column names. Here we perform
    38                           * the check to see if there are any differences in the length if so set a flag.
    39                           */
    40                          SELECT  STAGING_UNPIVOT.ID
    41                          ,       (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_A
    42                          ,       (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_B
    43                          FROM    STAGING_UNPIVOT
    44                          JOIN    ALL_TAB_COLUMNS ATC     ON ATC.COLUMN_NAME = STAGING_UNPIVOT.COLUMN_NAME
    45                          WHERE   ATC.TABLE_NAME='BASE'
    46                  )       A
    47                  WHERE   COALESCE(ERR_LENGTH_A,ERR_LENGTH_B) IS NOT NULL
    48                  AND     ST.ID = A.ID
    49          )
    50  /
    2 rows updated.
    SQL> SELECT * FROM STAGING;
            ID A                                        B                                        E
             1 XXXXXXXXXXXXXXXXXXXXXXXXXX               XXXXXXXXXXXXXXXXXXXXXXXXX                Y
             2 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXXX               Y
             3 XXXXXXXXXXXXXXXXXXXXXXXXX                XXXXXXXXXXXXXXXXXXXXXXXXX
    Hopefully the comments make sense. If you have any questions, please let me know.
    This assumes the column names are the same between the staging and base tables. In addition, as you add more columns to this table, you'll have to add more CASE statements to check the lengths and update the COALESCE check as necessary.
    Thanks!
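
    For a staging table with many columns, a dynamic alternative (an illustrative sketch, not part of the original reply) is to drive the checks off the data dictionary, so nothing needs editing as columns are added. It uses the STAGING/BASE example tables from this thread:
    -- Flag staging rows whose data exceeds the base table's column length.
    BEGIN
      FOR c IN (SELECT s.column_name, b.data_length
                FROM   all_tab_columns s
                JOIN   all_tab_columns b ON b.column_name = s.column_name
                WHERE  s.table_name = 'STAGING'
                AND    b.table_name = 'BASE'
                AND    s.column_name NOT IN ('ID', 'ERR_LENGTH'))
      LOOP
        EXECUTE IMMEDIATE
          'UPDATE staging SET err_length = ''Y''' ||
          ' WHERE LENGTH(' || c.column_name || ') > ' || c.data_length;
      END LOOP;
    END;
    /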

  • Problem during Data Warehouse Loading (from staging table to Cube)

    Hi All,
    I have created a staging module in OWB to load my flat files into my staging tables. I have created a warehouse module to load my staging tables into the dimension and cube that I have created.
    My scenario:
    I have a temp_table_transaction which has been loaded from my flat file. This table was loaded with 168,271,269 records from the flat file.
    I have created a mapping in OWB that loads temp_table_transaction, joined with other tables and with some expressions and conversion functions, into a new table called stg_tbl_transaction in my staging module. Running this mapping takes 3 hours and 45 minutes with this mapping configuration:
    Default operating mode in the mapping's runtime parameters = Set based
    My dimension filled correctly, but I have two problems when I want to transfer my staging table to my cube:
    Problem #1:
    I have created a cube called transaction_cube with OWB, and it generated and deployed correctly.
    I have created a map to fill my cube with the 168,271,268 records in the staging table stg_tbl_transaction and deployed it to the server (my cube map's operating mode is set based).
    But this map had not completed after 9 hours of running, and I was forced to cancel it by killing its sessions. I want to know whether this amount of time is acceptable for loading this volume of data, or whether this volume simply needs more time. Please let me know if anybody has seen this issue.
    Problem #2:
    To test my map, I created a map configured set based in operating mode, selected stg_tbl_transaction (168,271,268 records) as the source, and created another table to transfer and load my data into. I wanted to test the time we should expect this simple map to take, but after 5 hours my data had not loaded into the new table. I want to know where my problem is. Should I have set something in the map configuration, or something else? Please guide me on these problems.
    CONFIGURATION OF MY SERVER:
    I run OWB on a two-socket Xeon 5500-series server with 192 GB RAM and disks in a RAID 10 array.
    Regards,
    Sahar

    For all of you:
    It is possible to load from an InfoSet to a cube; we did it, and it was OK.
    Data really is loaded from the InfoSet (cube + master data) to the cube.
    When you create a transformation under a cube, the InfoSet is proposed, and it works fine.
    Now the process is no longer operational and I don't understand why.
    Loading from an InfoSet to a cube is possible; I can send you screenshots if you want.
    Christophe
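
    On the OWB/Oracle side of problem #2, a plain staging-to-target copy of ~168 million rows is usually fastest as a direct-path, parallel insert. A minimal sketch, assuming the session may use parallel DML; TARGET_TABLE and the degree of 8 are illustrative, not from the thread:
    -- Direct-path, parallel copy from staging to a target table.
    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(t, 8) */ INTO target_table t
    SELECT /*+ PARALLEL(s, 8) */ *
    FROM   stg_tbl_transaction s;

    COMMIT;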

  • Unable to load data from FDMEE staging table to HFM tables

    Hi,
    We have installed EPM 11.1.2.3 with all latest related products (ODI/FDMEE) in our development environment
    We are in the process of loading data from EBS R12 to HFM using ERPI (Data Management in EPM 11.1.2.3). We could import and validate the data, but when we try to export the data to HFM, the process keeps running for hours (it neither gives any error nor completes).
    When we check the process details in the ODI work repository, the processes show the following statuses:
    COMM_LOAD_BALANCES - Running (still running after 1 day)
    EBS_GL_LOAD_BALANCES_DATA - Successful
    COMM_LOAD_BALANCES - Successful
    We could load data into the staging table of the FDMEE database schema, and we are even able to drill through to the source system (EBS R12) from the Data Load Workbench, but we are not able to load the data into the HFM application.
    Log details from the process are below.
    2013-11-05 17:04:59,747 INFO  [AIF]: FDMEE Process Start, Process ID: 31
    2013-11-05 17:04:59,747 INFO  [AIF]: FDMEE Logging Level: 4
    2013-11-05 17:04:59,748 INFO  [AIF]: FDMEE Log File: C:\FDMEE\outbox\logs\OASIS_31.log
    2013-11-05 17:04:59,748 INFO  [AIF]: User:admin
    2013-11-05 17:04:59,748 INFO  [AIF]: Location:VisionLoc (Partitionkey:1)
    2013-11-05 17:04:59,749 INFO  [AIF]: Period Name:JAN (Period Key:1/1/12 12:00 AM)
    2013-11-05 17:04:59,749 INFO  [AIF]: Category Name:VisionCat (Category key:3)
    2013-11-05 17:04:59,749 INFO  [AIF]: Rule Name:VisionRule (Rule ID:2)
    2013-11-05 17:05:00,844 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2013-11-05 17:05:00,844 INFO  [AIF]: Java Platform: java1.6.0_37
    2013-11-05 17:05:02,910 INFO  [AIF]: -------START IMPORT STEP-------
    2013-11-05 17:05:02,953 INFO  [AIF]: -------END IMPORT STEP-------
    2013-11-05 17:05:03,030 INFO  [AIF]: -------START EXPORT STEP-------
    2013-11-05 17:05:03,108 INFO  [AIF]:
    Move Data for Period 'JAN'
    Any help on above is much appreciated.
    Thank you
    Regards
    Praneeth

    Hi,
    I have followed steps 1 & 2 above. Now the log shows something like the below:
    2013-11-05 09:47:31,179 INFO  [AIF]: FDMEE Process Start, Process ID: 22
    2013-11-05 09:47:31,179 INFO  [AIF]: FDMEE Logging Level: 4
    2013-11-05 09:47:31,179 INFO  [AIF]: FDMEE Log File: C:\FDMEE\outbox\logs\OASIS_22.log
    2013-11-05 09:47:31,180 INFO  [AIF]: User:admin
    2013-11-05 09:47:31,180 INFO  [AIF]: Location:VisionLoc (Partitionkey:1)
    2013-11-05 09:47:31,180 INFO  [AIF]: Period Name:JAN (Period Key:1/1/12 12:00 AM)
    2013-11-05 09:47:31,180 INFO  [AIF]: Category Name:VisionCat (Category key:3)
    2013-11-05 09:47:31,181 INFO  [AIF]: Rule Name:VisionRule (Rule ID:2)
    2013-11-05 09:47:32,378 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2013-11-05 09:47:32,378 INFO  [AIF]: Java Platform: java1.6.0_37
    2013-11-05 09:47:34,652 INFO  [AIF]: -------START IMPORT STEP-------
    2013-11-05 09:47:34,698 INFO  [AIF]: -------END IMPORT STEP-------
    2013-11-05 09:47:34,744 INFO  [AIF]: -------START EXPORT STEP-------
    2013-11-05 09:47:34,828 INFO  [AIF]:
    Move Data for Period 'JAN'
    2013-11-08 11:49:10,478 INFO  [AIF]: FDMEE Process Start, Process ID: 22
    2013-11-08 11:49:10,493 INFO  [AIF]: FDMEE Logging Level: 5
    2013-11-08 11:49:10,493 INFO  [AIF]: FDMEE Log File: C:\FDMEE\outbox\logs\OASIS_22.log
    2013-11-08 11:49:10,494 INFO  [AIF]: User:admin
    2013-11-08 11:49:10,494 INFO  [AIF]: Location:VISIONLOC (Partitionkey:1)
    2013-11-08 11:49:10,494 INFO  [AIF]: Period Name:JAN (Period Key:1/1/12 12:00 AM)
    2013-11-08 11:49:10,495 INFO  [AIF]: Category Name:VISIONCAT (Category key:1)
    2013-11-08 11:49:10,495 INFO  [AIF]: Rule Name:VISIONRULE (Rule ID:1)
    2013-11-08 11:49:11,903 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2013-11-08 11:49:11,909 INFO  [AIF]: Java Platform: java1.6.0_37
    2013-11-08 11:49:14,037 INFO  [AIF]: -------START IMPORT STEP-------
    2013-11-08 11:49:14,105 INFO  [AIF]: -------END IMPORT STEP-------
    2013-11-08 11:49:14,152 INFO  [AIF]: -------START EXPORT STEP-------
    2013-11-08 11:49:14,178 DEBUG [AIF]: CommData.exportData - START
    2013-11-08 11:49:14,183 DEBUG [AIF]: CommData.getRuleInfo - START
    2013-11-08 11:49:14,188 DEBUG [AIF]:
            SELECT brl.RULE_ID
            ,br.RULE_NAME
            ,brl.PARTITIONKEY
            ,brl.CATKEY
            ,part.PARTVALGROUP
            ,br.SOURCE_SYSTEM_ID
            ,ss.SOURCE_SYSTEM_TYPE
            ,CASE
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'EBS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'PS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FUSION%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FILE%' THEN 'N'
              ELSE 'Y'
            END SOURCE_ADAPTER_FLAG
            ,app.APPLICATION_ID
            ,app.TARGET_APPLICATION_NAME
            ,app.TARGET_APPLICATION_TYPE
            ,app.DATA_LOAD_METHOD
            ,brl.PLAN_TYPE
            ,CASE brl.PLAN_TYPE
              WHEN 'PLAN1' THEN 1
              WHEN 'PLAN2' THEN 2
              WHEN 'PLAN3' THEN 3
              WHEN 'PLAN4' THEN 4
              WHEN 'PLAN5' THEN 5
              ELSE 0
            END PLAN_NUMBER
            ,br.INCL_ZERO_BALANCE_FLAG
            ,br.PERIOD_MAPPING_TYPE
            ,br.INCLUDE_ADJ_PERIODS_FLAG
            ,br.BALANCE_TYPE ACTUAL_FLAG
            ,br.AMOUNT_TYPE
            ,br.BALANCE_SELECTION
            ,br.BALANCE_METHOD_CODE
            ,COALESCE(br.SIGNAGE_METHOD, 'ABSOLUTE') SIGNAGE_METHOD
            ,br.CURRENCY_CODE
            ,br.BAL_SEG_VALUE_OPTION_CODE
            ,brl.EXECUTION_MODE
            ,COALESCE(brl.IMPORT_FROM_SOURCE_FLAG, 'Y') IMPORT_FROM_SOURCE_FLAG
            ,COALESCE(brl.RECALCULATE_FLAG, 'N') RECALCULATE_FLAG
            ,COALESCE(brl.EXPORT_TO_TARGET_FLAG, 'N') EXPORT_TO_TARGET_FLAG
            ,CASE
              WHEN (br.LEDGER_GROUP_ID IS NOT NULL) THEN 'MULTI'
              WHEN (br.SOURCE_LEDGER_ID IS NOT NULL) THEN 'SINGLE'
              ELSE 'NONE'
            END LEDGER_GROUP_CODE
            ,COALESCE(br.BALANCE_AMOUNT_BS, 'YTD') BALANCE_AMOUNT_BS
            ,COALESCE(br.BALANCE_AMOUNT_IS, 'PERIODIC') BALANCE_AMOUNT_IS
            ,br.LEDGER_GROUP
            ,(SELECT brd.DETAIL_CODE
              FROM AIF_BAL_RULE_DETAILS brd
              WHERE brd.RULE_ID = br.RULE_ID
              AND brd.DETAIL_TYPE = 'LEDGER'       
            ) PS_LEDGER
            ,CASE lg.LEDGER_TEMPLATE
              WHEN 'COMMITMENT' THEN 'Y'
              ELSE 'N'
            END KK_FLAG
            ,p.LAST_UPDATED_BY
            ,p.AIF_WEB_SERVICE_URL WEB_SERVICE_URL
            ,p.EPM_ORACLE_INSTANCE
            FROM AIF_PROCESSES p
            INNER JOIN AIF_BAL_RULE_LOADS brl
              ON brl.LOADID = p.PROCESS_ID
            INNER JOIN AIF_BALANCE_RULES br
              ON br.RULE_ID = brl.RULE_ID
            INNER JOIN AIF_SOURCE_SYSTEMS ss
              ON ss.SOURCE_SYSTEM_ID = br.SOURCE_SYSTEM_ID
            INNER JOIN AIF_TARGET_APPLICATIONS app
              ON app.APPLICATION_ID = brl.APPLICATION_ID
            INNER JOIN TPOVPARTITION part
              ON part.PARTITIONKEY = br.PARTITIONKEY
            INNER JOIN TBHVIMPGROUP imp
              ON imp.IMPGROUPKEY = part.PARTIMPGROUP
            LEFT OUTER JOIN AIF_COA_LEDGERS l
              ON l.SOURCE_SYSTEM_ID = p.SOURCE_SYSTEM_ID
              AND l.SOURCE_LEDGER_ID = COALESCE(br.SOURCE_LEDGER_ID,imp.IMPSOURCELEDGERID)
            LEFT OUTER JOIN AIF_PS_SET_CNTRL_REC_STG scr
              ON scr.SOURCE_SYSTEM_ID = l.SOURCE_SYSTEM_ID
              AND scr.SETCNTRLVALUE = l.SOURCE_LEDGER_NAME
              AND scr.RECNAME = 'LED_GRP_TBL'
            LEFT OUTER JOIN AIF_PS_LED_GRP_TBL_STG lg
              ON lg.SOURCE_SYSTEM_ID = scr.SOURCE_SYSTEM_ID
              AND lg.SETID = scr.SETID
              AND lg.LEDGER_GROUP = br.LEDGER_GROUP
            WHERE p.PROCESS_ID = 22
    2013-11-08 11:49:14,195 DEBUG [AIF]:
          SELECT adim.BALANCE_COLUMN_NAME DIMNAME
          ,adim.DIMENSION_ID
          ,dim.TARGET_DIMENSION_CLASS_NAME
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID1
          ) COA_SEGMENT_NAME1
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID2
          ) COA_SEGMENT_NAME2
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID3
          ) COA_SEGMENT_NAME3
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID4
          ) COA_SEGMENT_NAME4
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID5
          ) COA_SEGMENT_NAME5
          ,(SELECT CASE mdd.ORPHAN_OPTION_CODE
              WHEN 'CHILD' THEN 'N'
              WHEN 'ROOT' THEN 'N'
              ELSE 'Y'
            END DIMENSION_FILTER_FLAG
            FROM AIF_MAP_DIM_DETAILS_V mdd
            ,AIF_MAPPING_RULES mr
            WHERE mr.PARTITIONKEY = tpp.PARTITIONKEY
            AND mdd.RULE_ID = mr.RULE_ID
            AND mdd.DIMENSION_ID = adim.DIMENSION_ID
          ) DIMENSION_FILTER_FLAG
          ,tiie.IMPCONCATCHAR
          FROM TPOVPARTITION tpp
          INNER JOIN AIF_TARGET_APPL_DIMENSIONS adim
            ON adim.APPLICATION_ID = 2
          INNER JOIN AIF_DIMENSIONS dim
            ON dim.DIMENSION_ID = adim.DIMENSION_ID
          LEFT OUTER JOIN TBHVIMPITEMERPI tiie
            ON tiie.IMPGROUPKEY = tpp.PARTIMPGROUP
            AND tiie.IMPFLDFIELDNAME = adim.BALANCE_COLUMN_NAME
            AND tiie.IMPMAPTYPE = 'ERP'
          WHERE tpp.PARTITIONKEY = 1
          AND adim.BALANCE_COLUMN_NAME IS NOT NULL
          ORDER BY adim.BALANCE_COLUMN_NAME
    2013-11-08 11:49:14,197 DEBUG [AIF]: {'APPLICATION_ID': 2L, 'IMPORT_FROM_SOURCE_FLAG': u'N', 'PLAN_TYPE': None, 'RULE_NAME': u'VISIONRULE', 'ACTUAL_FLAG': u'A', 'IS_INCREMENTAL_LOAD': False, 'EPM_ORACLE_INSTANCE': u'C:\\Oracle\\Middleware\\user_projects\\epmsystem1', 'CATKEY': 1L, 'BAL_SEG_VALUE_OPTION_CODE': u'ALL', 'INCLUDE_ADJ_PERIODS_FLAG': u'N', 'PERIOD_MAPPING_TYPE': u'EXPLICIT', 'SOURCE_SYSTEM_TYPE': u'EBS_R12', 'LEDGER_GROUP': None, 'TARGET_APPLICATION_NAME': u'OASIS', 'RECALCULATE_FLAG': u'N', 'SOURCE_SYSTEM_ID': 2L, 'TEMP_DATA_TABLE_NAME': 'TDATASEG_T', 'KK_FLAG': u'N', 'AMOUNT_TYPE': u'MONETARY', 'EXPORT_TO_TARGET_FLAG': u'Y', 'DATA_TABLE_NAME': 'TDATASEG', 'DIMNAME_LIST': [u'ACCOUNT', u'ENTITY', u'ICP', u'UD1', u'UD2', u'UD3', u'UD4'], 'TDATAMAPTYPE': 'ERP', 'LAST_UPDATED_BY': u'admin', 'DIMNAME_MAP': {u'UD3': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT5', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD3', 'DIMENSION_ID': 9L}, u'ICP': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'ICP', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT7', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ICP', 'DIMENSION_ID': 8L}, u'ENTITY': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Entity', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT1', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ENTITY', 'DIMENSION_ID': 12L}, u'UD2': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT4', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD2', 'DIMENSION_ID': 11L}, u'UD4': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT6', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD4', 'DIMENSION_ID': 1L}, u'ACCOUNT': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Account', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT3', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ACCOUNT', 'DIMENSION_ID': 10L}, u'UD1': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT2', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD1', 'DIMENSION_ID': 7L}}, 'TARGET_APPLICATION_TYPE': u'HFM', 'PARTITIONKEY': 1L, 'PARTVALGROUP': u'[NONE]', 'LEDGER_GROUP_CODE': u'SINGLE', 'INCLUDE_ZERO_BALANCE_FLAG': u'N', 'EXECUTION_MODE': u'SNAPSHOT', 'PLAN_NUMBER': 0L, 'PS_LEDGER': None, 'BALANCE_SELECTION': u'FUNCTIONAL', 'BALANCE_AMOUNT_IS': u'PERIODIC', 'RULE_ID': 1L, 'BALANCE_AMOUNT_BS': u'YTD', 'CURRENCY_CODE': None, 'SOURCE_ADAPTER_FLAG': u'N', 'BALANCE_METHOD_CODE': u'STANDARD', 'SIGNAGE_METHOD': u'SAME', 'WEB_SERVICE_URL': u'http://localhost:9000/aif', 'DATA_LOAD_METHOD': u'CLASSIC_VIA_EPMI'}
    2013-11-08 11:49:14,197 DEBUG [AIF]: CommData.getRuleInfo - END
    2013-11-08 11:49:14,224 DEBUG [AIF]: CommData.insertPeriods - START
    2013-11-08 11:49:14,228 DEBUG [AIF]: CommData.getLedgerListAndMap - START
    2013-11-08 11:49:14,229 DEBUG [AIF]: CommData.getLedgerSQL - START
    2013-11-08 11:49:14,229 DEBUG [AIF]: CommData.getLedgerSQL - END
    2013-11-08 11:49:14,229 DEBUG [AIF]:
              SELECT l.SOURCE_LEDGER_ID
              ,l.SOURCE_LEDGER_NAME
              ,l.SOURCE_COA_ID
              ,l.CALENDAR_ID
              ,'0' SETID
              ,l.PERIOD_TYPE
              ,NULL LEDGER_TABLE_NAME
              FROM AIF_BALANCE_RULES br
              ,AIF_COA_LEDGERS l
              WHERE br.RULE_ID = 1
              AND l.SOURCE_SYSTEM_ID = br.SOURCE_SYSTEM_ID
              AND l.SOURCE_LEDGER_ID = br.SOURCE_LEDGER_ID
    2013-11-08 11:49:14,230 DEBUG [AIF]: CommData.getLedgerListAndMap - END
    2013-11-08 11:49:14,232 DEBUG [AIF]:
            INSERT INTO AIF_PROCESS_PERIODS (
              PROCESS_ID
              ,PERIODKEY
              ,PERIOD_ID
              ,ADJUSTMENT_PERIOD_FLAG
              ,GL_PERIOD_YEAR
              ,GL_PERIOD_NUM
              ,GL_PERIOD_NAME
              ,GL_PERIOD_CODE
              ,GL_EFFECTIVE_PERIOD_NUM
              ,YEARTARGET
              ,PERIODTARGET
              ,IMP_ENTITY_TYPE
              ,IMP_ENTITY_ID
              ,IMP_ENTITY_NAME
              ,TRANS_ENTITY_TYPE
              ,TRANS_ENTITY_ID
              ,TRANS_ENTITY_NAME
              ,PRIOR_PERIOD_FLAG
              ,SOURCE_LEDGER_ID
                    SELECT DISTINCT brl.LOADID PROCESS_ID
                    ,pp.PERIODKEY PERIODKEY
                    ,prd.PERIOD_ID
                    ,COALESCE(prd.ADJUSTMENT_PERIOD_FLAG, 'N') ADJUSTMENT_PERIOD_FLAG
                    ,COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) GL_PERIOD_YEAR
                    ,COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0) GL_PERIOD_NUM
                    ,COALESCE(prd.PERIOD_NAME, ppsrc.GL_PERIOD_NAME,'0') GL_PERIOD_NAME
                    ,COALESCE(prd.PERIOD_CODE, CAST(COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0) AS VARCHAR(38)),'0') GL_PERIOD_CODE
                    ,(COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) * 10000 + COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0)) GL_EFFECTIVE_PERIOD_NUM
                    ,COALESCE(ppa.YEARTARGET, pp.YEARTARGET) YEARTARGET
                    ,COALESCE(ppa.PERIODTARGET, pp.PERIODTARGET) PERIODTARGET
                    ,'PROCESS_BAL_IMP' IMP_ENTITY_TYPE
                    ,(COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) * 10000 + COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0)) IMP_ENTITY_ID
                    ,COALESCE(prd.PERIOD_NAME, ppsrc.GL_PERIOD_NAME,'0') IMP_ENTITY_NAME
                    ,'PROCESS_BAL_TRANS' TRANS_ENTITY_TYPE
                    ,(COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) * 10000 + COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0)) TRANS_ENTITY_ID
                    ,pp.PERIODDESC TRANS_ENTITY_NAME
                    ,'N' PRIOR_PERIOD_FLAG
                    ,2202 SOURCE_LEDGER_ID
                    FROM (
                      AIF_BAL_RULE_LOADS brl
                      INNER JOIN TPOVCATEGORY pc
                        ON pc.CATKEY = brl.CATKEY
                      INNER JOIN TPOVPERIOD_FLAT_V pp
                        ON pp.PERIODFREQ = pc.CATFREQ
                        AND pp.PERIODKEY >= brl.START_PERIODKEY
                        AND pp.PERIODKEY <= brl.END_PERIODKEY
                      LEFT OUTER JOIN TPOVPERIODADAPTOR_FLAT_V ppa
                        ON ppa.PERIODKEY = pp.PERIODKEY
                        AND ppa.PERIODFREQ = pp.PERIODFREQ
                        AND ppa.INTSYSTEMKEY = 'OASIS'
                    INNER JOIN TPOVPERIODSOURCE ppsrc
                      ON ppsrc.PERIODKEY = pp.PERIODKEY
                      AND ppsrc.MAPPING_TYPE = 'EXPLICIT'
                      AND ppsrc.SOURCE_SYSTEM_ID = 2
                      AND ppsrc.CALENDAR_ID IN ('29067')
                    LEFT OUTER JOIN AIF_GL_PERIODS_STG prd
                      ON prd.PERIOD_ID = ppsrc.PERIOD_ID
                      AND prd.SOURCE_SYSTEM_ID = ppsrc.SOURCE_SYSTEM_ID
                      AND prd.CALENDAR_ID = ppsrc.CALENDAR_ID
              AND prd.SETID = '0'
              AND prd.PERIOD_TYPE = '507'
                      AND prd.ADJUSTMENT_PERIOD_FLAG = 'N'
                    WHERE brl.LOADID = 22
                    ORDER BY pp.PERIODKEY
                    ,GL_EFFECTIVE_PERIOD_NUM
    2013-11-08 11:49:14,235 DEBUG [AIF]: CommData.insertPeriods - END
    2013-11-08 11:49:14,240 DEBUG [AIF]: CommData.moveData - START
    2013-11-08 11:49:14,242 DEBUG [AIF]: CommData.getPovList - START
    2013-11-08 11:49:14,242 DEBUG [AIF]:
            SELECT PARTITIONKEY
            ,PARTNAME
            ,CATKEY
            ,CATNAME
            ,PERIODKEY
            ,COALESCE(PERIODDESC, TO_CHAR(PERIODKEY,'YYYY-MM-DD HH24:MI:SS')) PERIODDESC
            ,RULE_ID
            ,RULE_NAME
            FROM (
              SELECT DISTINCT brl.PARTITIONKEY
              ,part.PARTNAME
              ,brl.CATKEY
              ,cat.CATNAME
              ,pprd.PERIODKEY
              ,pp.PERIODDESC
              ,brl.RULE_ID
              ,br.RULE_NAME
              FROM AIF_BAL_RULE_LOADS brl
              INNER JOIN AIF_BALANCE_RULES br
                ON br.RULE_ID = brl.RULE_ID
              INNER JOIN TPOVPARTITION part
                ON part.PARTITIONKEY = brl.PARTITIONKEY
              INNER JOIN TPOVCATEGORY cat
                ON cat.CATKEY = brl.CATKEY
              INNER JOIN AIF_PROCESS_PERIODS pprd
                ON pprd.PROCESS_ID = brl.LOADID
              LEFT OUTER JOIN TPOVPERIOD pp
                ON pp.PERIODKEY = pprd.PERIODKEY
              WHERE brl.LOADID = 22
            ) q
            ORDER BY PARTITIONKEY
            ,CATKEY
            ,PERIODKEY
            ,RULE_ID
    2013-11-08 11:49:14,244 DEBUG [AIF]: CommData.getPovList - END
    2013-11-08 11:49:14,245 INFO  [AIF]:
    Move Data for Period 'JAN'
    2013-11-08 11:49:14,246 DEBUG [AIF]:
              UPDATE TDATASEG
              SET LOADID = 22
              WHERE PARTITIONKEY = 1
              AND CATKEY = 1
              AND RULE_ID = 1
              AND LOADID < 22
                AND PERIODKEY = '2012-01-01'
    2013-11-08 11:49:14,320 DEBUG [AIF]: Number of Rows updated on TDATASEG: 1842
    2013-11-08 11:49:14,320 DEBUG [AIF]:
            INSERT INTO AIF_APPL_LOAD_AUDIT (
              LOADID
              ,TARGET_APPLICATION_TYPE
              ,TARGET_APPLICATION_NAME
              ,PLAN_TYPE
              ,SOURCE_LEDGER_ID
              ,EPM_YEAR
              ,EPM_PERIOD
              ,SNAPSHOT_FLAG
              ,SEGMENT_FILTER_FLAG
              ,PARTITIONKEY
              ,CATKEY
              ,RULE_ID
              ,PERIODKEY
              ,EXPORT_TO_TARGET_FLAG
            SELECT DISTINCT 22
            ,TARGET_APPLICATION_TYPE
            ,TARGET_APPLICATION_NAME
            ,PLAN_TYPE
            ,SOURCE_LEDGER_ID
            ,EPM_YEAR
            ,EPM_PERIOD
            ,SNAPSHOT_FLAG
            ,SEGMENT_FILTER_FLAG
            ,PARTITIONKEY
            ,CATKEY
            ,RULE_ID
            ,PERIODKEY
            ,'N'
            FROM AIF_APPL_LOAD_AUDIT
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND RULE_ID = 1
            AND LOADID < 22
                AND PERIODKEY = '2012-01-01'
    2013-11-08 11:49:14,321 DEBUG [AIF]: Number of Rows inserted into AIF_APPL_LOAD_AUDIT: 1
    2013-11-08 11:49:14,322 DEBUG [AIF]:
            INSERT INTO AIF_APPL_LOAD_PRD_AUDIT (
              LOADID
              ,GL_PERIOD_ID
              ,GL_PERIOD_YEAR
              ,DELTA_RUN_ID
              ,PARTITIONKEY
              ,CATKEY
              ,RULE_ID
              ,PERIODKEY
            SELECT DISTINCT 22
            ,GL_PERIOD_ID
            ,GL_PERIOD_YEAR
            ,DELTA_RUN_ID
            ,PARTITIONKEY
            ,CATKEY
            ,RULE_ID
            ,PERIODKEY
            FROM AIF_APPL_LOAD_PRD_AUDIT
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND RULE_ID = 1
            AND LOADID < 22
                AND PERIODKEY = '2012-01-01'
    2013-11-08 11:49:14,323 DEBUG [AIF]: Number of Rows inserted into AIF_APPL_LOAD_PRD_AUDIT: 1
    2013-11-08 11:49:14,325 DEBUG [AIF]: CommData.moveData - END
    2013-11-08 11:49:14,332 DEBUG [AIF]: CommData.updateWorkflow - START
    2013-11-08 11:49:14,332 DEBUG [AIF]:
        SELECT tlp.PROCESSSTATUS
        ,tlps.PROCESSSTATUSDESC
        ,CASE WHEN (tlp.INTLOCKSTATE = 60) THEN 'Y' ELSE 'N' END LOCK_FLAG
        FROM TLOGPROCESS tlp
        ,TLOGPROCESSSTATES tlps
        WHERE tlp.PARTITIONKEY = 1
        AND tlp.CATKEY = 1
        AND tlp.PERIODKEY = '2012-01-01'
        AND tlp.RULE_ID = 1
        AND tlps.PROCESSSTATUSKEY = tlp.PROCESSSTATUS
    2013-11-08 11:49:14,336 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 20
              ,PROCESSEXP = 0
              ,PROCESSENTLOAD = 0
              ,PROCESSENTVAL = 0
              ,PROCESSEXPNOTE = NULL
              ,PROCESSENTLOADNOTE = NULL
              ,PROCESSENTVALNOTE = NULL
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,338 DEBUG [AIF]: CommData.updateWorkflow - END
    2013-11-08 11:49:14,339 DEBUG [AIF]: CommData.purgeInvalidRecordsTDATASEG - START
    2013-11-08 11:49:14,341 DEBUG [AIF]:
            DELETE FROM TDATASEG
            WHERE LOADID = 22
              AND (
            PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
            AND VALID_FLAG = 'N'
    2013-11-08 11:49:14,342 DEBUG [AIF]: Number of Rows deleted from TDATASEG: 0
    2013-11-08 11:49:14,342 DEBUG [AIF]: CommData.purgeInvalidRecordsTDATASEG - END
    2013-11-08 11:49:14,344 DEBUG [AIF]: CommData.updateAppLoadAudit - START
    2013-11-08 11:49:14,344 DEBUG [AIF]:
            UPDATE AIF_APPL_LOAD_AUDIT
            SET EXPORT_TO_TARGET_FLAG = 'Y'
            WHERE LOADID = 22
            AND PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY= '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,345 DEBUG [AIF]: Number of Rows updated on AIF_APPL_LOAD_AUDIT: 1
    2013-11-08 11:49:14,345 DEBUG [AIF]: CommData.updateAppLoadAudit - END
    2013-11-08 11:49:14,345 DEBUG [AIF]: CommData.updateWorkflow - START
    2013-11-08 11:49:14,346 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 21
              ,PROCESSEXP = 1
              ,PROCESSEXPNOTE = 'Export Successful'
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,347 DEBUG [AIF]: CommData.updateWorkflow - END
    2013-11-08 11:49:14,347 DEBUG [AIF]: CommData.exportData - END
    2013-11-08 11:49:14,404 DEBUG [AIF]: HfmData.loadData - START
    2013-11-08 11:49:14,404 DEBUG [AIF]: CommData.getRuleInfo - START
    2013-11-08 11:49:14,404 DEBUG [AIF]:
            SELECT brl.RULE_ID
            ,br.RULE_NAME
            ,brl.PARTITIONKEY
            ,brl.CATKEY
            ,part.PARTVALGROUP
            ,br.SOURCE_SYSTEM_ID
            ,ss.SOURCE_SYSTEM_TYPE
            ,CASE
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'EBS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'PS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FUSION%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FILE%' THEN 'N'
              ELSE 'Y'
            END SOURCE_ADAPTER_FLAG
            ,app.APPLICATION_ID
            ,app.TARGET_APPLICATION_NAME
            ,app.TARGET_APPLICATION_TYPE
            ,app.DATA_LOAD_METHOD
            ,brl.PLAN_TYPE
            ,CASE brl.PLAN_TYPE
              WHEN 'PLAN1' THEN 1
              WHEN 'PLAN2' THEN 2
              WHEN 'PLAN3' THEN 3
              WHEN 'PLAN4' THEN 4
              WHEN 'PLAN5' THEN 5
              ELSE 0
            END PLAN_NUMBER
            ,br.INCL_ZERO_BALANCE_FLAG
            ,br.PERIOD_MAPPING_TYPE
            ,br.INCLUDE_ADJ_PERIODS_FLAG
            ,br.BALANCE_TYPE ACTUAL_FLAG
            ,br.AMOUNT_TYPE
            ,br.BALANCE_SELECTION
            ,br.BALANCE_METHOD_CODE
            ,COALESCE(br.SIGNAGE_METHOD, 'ABSOLUTE') SIGNAGE_METHOD
            ,br.CURRENCY_CODE
            ,br.BAL_SEG_VALUE_OPTION_CODE
            ,brl.EXECUTION_MODE
            ,COALESCE(brl.IMPORT_FROM_SOURCE_FLAG, 'Y') IMPORT_FROM_SOURCE_FLAG
            ,COALESCE(brl.RECALCULATE_FLAG, 'N') RECALCULATE_FLAG
            ,COALESCE(brl.EXPORT_TO_TARGET_FLAG, 'N') EXPORT_TO_TARGET_FLAG
            ,CASE
              WHEN (br.LEDGER_GROUP_ID IS NOT NULL) THEN 'MULTI'
              WHEN (br.SOURCE_LEDGER_ID IS NOT NULL) THEN 'SINGLE'
              ELSE 'NONE'
            END LEDGER_GROUP_CODE
            ,COALESCE(br.BALANCE_AMOUNT_BS, 'YTD') BALANCE_AMOUNT_BS
            ,COALESCE(br.BALANCE_AMOUNT_IS, 'PERIODIC') BALANCE_AMOUNT_IS
            ,br.LEDGER_GROUP
            ,(SELECT brd.DETAIL_CODE
              FROM AIF_BAL_RULE_DETAILS brd
              WHERE brd.RULE_ID = br.RULE_ID
              AND brd.DETAIL_TYPE = 'LEDGER'       
            ) PS_LEDGER
            ,CASE lg.LEDGER_TEMPLATE
              WHEN 'COMMITMENT' THEN 'Y'
              ELSE 'N'
            END KK_FLAG
            ,p.LAST_UPDATED_BY
            ,p.AIF_WEB_SERVICE_URL WEB_SERVICE_URL
            ,p.EPM_ORACLE_INSTANCE
            FROM AIF_PROCESSES p
            INNER JOIN AIF_BAL_RULE_LOADS brl
              ON brl.LOADID = p.PROCESS_ID
            INNER JOIN AIF_BALANCE_RULES br
              ON br.RULE_ID = brl.RULE_ID
            INNER JOIN AIF_SOURCE_SYSTEMS ss
              ON ss.SOURCE_SYSTEM_ID = br.SOURCE_SYSTEM_ID
            INNER JOIN AIF_TARGET_APPLICATIONS app
              ON app.APPLICATION_ID = brl.APPLICATION_ID
            INNER JOIN TPOVPARTITION part
              ON part.PARTITIONKEY = br.PARTITIONKEY
            INNER JOIN TBHVIMPGROUP imp
              ON imp.IMPGROUPKEY = part.PARTIMPGROUP
            LEFT OUTER JOIN AIF_COA_LEDGERS l
              ON l.SOURCE_SYSTEM_ID = p.SOURCE_SYSTEM_ID
              AND l.SOURCE_LEDGER_ID = COALESCE(br.SOURCE_LEDGER_ID,imp.IMPSOURCELEDGERID)
            LEFT OUTER JOIN AIF_PS_SET_CNTRL_REC_STG scr
              ON scr.SOURCE_SYSTEM_ID = l.SOURCE_SYSTEM_ID
              AND scr.SETCNTRLVALUE = l.SOURCE_LEDGER_NAME
              AND scr.RECNAME = 'LED_GRP_TBL'
            LEFT OUTER JOIN AIF_PS_LED_GRP_TBL_STG lg
              ON lg.SOURCE_SYSTEM_ID = scr.SOURCE_SYSTEM_ID
              AND lg.SETID = scr.SETID
              AND lg.LEDGER_GROUP = br.LEDGER_GROUP
            WHERE p.PROCESS_ID = 22
    2013-11-08 11:49:14,406 DEBUG [AIF]:
          SELECT adim.BALANCE_COLUMN_NAME DIMNAME
          ,adim.DIMENSION_ID
          ,dim.TARGET_DIMENSION_CLASS_NAME
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID1
          ) COA_SEGMENT_NAME1
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID2
          ) COA_SEGMENT_NAME2
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID3
          ) COA_SEGMENT_NAME3
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID4
          ) COA_SEGMENT_NAME4
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID5
          ) COA_SEGMENT_NAME5
          ,(SELECT CASE mdd.ORPHAN_OPTION_CODE
              WHEN 'CHILD' THEN 'N'
              WHEN 'ROOT' THEN 'N'
              ELSE 'Y'
            END DIMENSION_FILTER_FLAG
            FROM AIF_MAP_DIM_DETAILS_V mdd
            ,AIF_MAPPING_RULES mr
            WHERE mr.PARTITIONKEY = tpp.PARTITIONKEY
            AND mdd.RULE_ID = mr.RULE_ID
            AND mdd.DIMENSION_ID = adim.DIMENSION_ID
          ) DIMENSION_FILTER_FLAG
          ,tiie.IMPCONCATCHAR
          FROM TPOVPARTITION tpp
          INNER JOIN AIF_TARGET_APPL_DIMENSIONS adim
            ON adim.APPLICATION_ID = 2
          INNER JOIN AIF_DIMENSIONS dim
            ON dim.DIMENSION_ID = adim.DIMENSION_ID
          LEFT OUTER JOIN TBHVIMPITEMERPI tiie
            ON tiie.IMPGROUPKEY = tpp.PARTIMPGROUP
            AND tiie.IMPFLDFIELDNAME = adim.BALANCE_COLUMN_NAME
            AND tiie.IMPMAPTYPE = 'ERP'
          WHERE tpp.PARTITIONKEY = 1
          AND adim.BALANCE_COLUMN_NAME IS NOT NULL
          ORDER BY adim.BALANCE_COLUMN_NAME
    2013-11-08 11:49:14,407 DEBUG [AIF]: {'APPLICATION_ID': 2L, 'IMPORT_FROM_SOURCE_FLAG': u'N', 'PLAN_TYPE': None, 'RULE_NAME': u'VISIONRULE', 'ACTUAL_FLAG': u'A', 'IS_INCREMENTAL_LOAD': False, 'EPM_ORACLE_INSTANCE': u'C:\\Oracle\\Middleware\\user_projects\\epmsystem1', 'CATKEY': 1L, 'BAL_SEG_VALUE_OPTION_CODE': u'ALL', 'INCLUDE_ADJ_PERIODS_FLAG': u'N', 'PERIOD_MAPPING_TYPE': u'EXPLICIT', 'SOURCE_SYSTEM_TYPE': u'EBS_R12', 'LEDGER_GROUP': None, 'TARGET_APPLICATION_NAME': u'OASIS', 'RECALCULATE_FLAG': u'N', 'SOURCE_SYSTEM_ID': 2L, 'TEMP_DATA_TABLE_NAME': 'TDATASEG_T', 'KK_FLAG': u'N', 'AMOUNT_TYPE': u'MONETARY', 'EXPORT_TO_TARGET_FLAG': u'Y', 'DATA_TABLE_NAME': 'TDATASEG', 'DIMNAME_LIST': [u'ACCOUNT', u'ENTITY', u'ICP', u'UD1', u'UD2', u'UD3', u'UD4'], 'TDATAMAPTYPE': 'ERP', 'LAST_UPDATED_BY': u'admin', 'DIMNAME_MAP': {u'UD3': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT5', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD3', 'DIMENSION_ID': 9L}, u'ICP': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'ICP', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT7', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ICP', 'DIMENSION_ID': 8L}, u'ENTITY': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Entity', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT1', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ENTITY', 'DIMENSION_ID': 12L}, u'UD2': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT4', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD2', 'DIMENSION_ID': 11L}, u'UD4': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT6', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD4', 'DIMENSION_ID': 1L}, u'ACCOUNT': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Account', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT3', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ACCOUNT', 'DIMENSION_ID': 10L}, u'UD1': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT2', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD1', 'DIMENSION_ID': 7L}}, 'TARGET_APPLICATION_TYPE': u'HFM', 'PARTITIONKEY': 1L, 'PARTVALGROUP': u'[NONE]', 'LEDGER_GROUP_CODE': u'SINGLE', 'INCLUDE_ZERO_BALANCE_FLAG': u'N', 'EXECUTION_MODE': u'SNAPSHOT', 'PLAN_NUMBER': 0L, 'PS_LEDGER': None, 'BALANCE_SELECTION': u'FUNCTIONAL', 'BALANCE_AMOUNT_IS': u'PERIODIC', 'RULE_ID': 1L, 'BALANCE_AMOUNT_BS': u'YTD', 'CURRENCY_CODE': None, 'SOURCE_ADAPTER_FLAG': u'N', 'BALANCE_METHOD_CODE': u'STANDARD', 'SIGNAGE_METHOD': u'SAME', 'WEB_SERVICE_URL': u'http://localhost:9000/aif', 'DATA_LOAD_METHOD': u'CLASSIC_VIA_EPMI'}
    2013-11-08 11:49:14,407 DEBUG [AIF]: CommData.getRuleInfo - END
    2013-11-08 11:49:14,407 DEBUG [AIF]: CommData.getPovList - START
    2013-11-08 11:49:14,407 DEBUG [AIF]:
            SELECT PARTITIONKEY
            ,PARTNAME
            ,CATKEY
            ,CATNAME
            ,PERIODKEY
            ,COALESCE(PERIODDESC, TO_CHAR(PERIODKEY,'YYYY-MM-DD HH24:MI:SS')) PERIODDESC
            ,RULE_ID
            ,RULE_NAME
            FROM (
              SELECT DISTINCT brl.PARTITIONKEY
              ,part.PARTNAME
              ,brl.CATKEY
              ,cat.CATNAME
              ,pprd.PERIODKEY
              ,pp.PERIODDESC
              ,brl.RULE_ID
              ,br.RULE_NAME
              FROM AIF_BAL_RULE_LOADS brl
              INNER JOIN AIF_BALANCE_RULES br
                ON br.RULE_ID = brl.RULE_ID
              INNER JOIN TPOVPARTITION part
                ON part.PARTITIONKEY = brl.PARTITIONKEY
              INNER JOIN TPOVCATEGORY cat
                ON cat.CATKEY = brl.CATKEY
              INNER JOIN AIF_PROCESS_PERIODS pprd
                ON pprd.PROCESS_ID = brl.LOADID
              LEFT OUTER JOIN TPOVPERIOD pp
                ON pp.PERIODKEY = pprd.PERIODKEY
              WHERE brl.LOADID = 22
            ) q
            ORDER BY PARTITIONKEY
            ,CATKEY
            ,PERIODKEY
            ,RULE_ID
    2013-11-08 11:49:14,409 DEBUG [AIF]: CommData.getPovList - END
    2013-11-08 11:49:14,409 DEBUG [AIF]: CommData.updateWorkflow - START
    2013-11-08 11:49:14,409 DEBUG [AIF]:
        SELECT tlp.PROCESSSTATUS
        ,tlps.PROCESSSTATUSDESC
        ,CASE WHEN (tlp.INTLOCKSTATE = 60) THEN 'Y' ELSE 'N' END LOCK_FLAG
        FROM TLOGPROCESS tlp
        ,TLOGPROCESSSTATES tlps
        WHERE tlp.PARTITIONKEY = 1
        AND tlp.CATKEY = 1
        AND tlp.PERIODKEY = '2012-01-01'
        AND tlp.RULE_ID = 1
        AND tlps.PROCESSSTATUSKEY = tlp.PROCESSSTATUS
    2013-11-08 11:49:14,410 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 30
              ,PROCESSENTLOAD = 0
              ,PROCESSENTVAL = 0
              ,PROCESSENTLOADNOTE = NULL
              ,PROCESSENTVALNOTE = NULL
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,411 DEBUG [AIF]: CommData.updateWorkflow - END
    2013-11-08 11:49:14,412 DEBUG [AIF]:
        SELECT COALESCE(usr.PROFILE_OPTION_VALUE, app.PROFILE_OPTION_VALUE, site.PROFILE_OPTION_VALUE) PROFILE_OPTION_VALUE
        FROM AIF_PROFILE_OPTIONS po
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES site
          ON site.PROFILE_OPTION_NAME = po.PROFILE_OPTION_NAME
          AND site.LEVEL_ID = 1000
          AND site.LEVEL_VALUE = 0
          AND site.LEVEL_ID <= po.MAX_LEVEL_ID
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES app
          ON app.PROFILE_OPTION_NAME = site.PROFILE_OPTION_NAME
          AND app.LEVEL_ID = 1005
          AND app.LEVEL_VALUE = NULL
          AND app.LEVEL_ID <= po.MAX_LEVEL_ID
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES usr
          ON usr.PROFILE_OPTION_NAME = usr.PROFILE_OPTION_NAME
          AND usr.LEVEL_ID = 1010
          AND usr.LEVEL_VALUE = NULL
          AND usr.LEVEL_ID <= po.MAX_LEVEL_ID
        WHERE po.PROFILE_OPTION_NAME = 'JAVA_HOME'
    2013-11-08 11:49:14,413 DEBUG [AIF]: HFM Load command:
    %EPM_ORACLE_HOME%/products/FinancialDataQuality/bin/HFM_LOAD.vbs "22" "a9E3uvSJNhkFhEQTmuUFFUElfdQgKJKHrb1EsjRsL6yZJlXsOFcVPbGWHhpOQzl9zvHoo3s%2Bdq6R4yJhp0GMNWIKMTcizp3%2F8HASkA7rVufXDWEpAKAK%2BvcFmj4zLTI3rtrKHlVEYrOLMY453J2lXk6Cy771mNSD8X114CqaWSdUKGbKTRGNpgE3BfRGlEd1wZ3cra4ee0jUbT2aTaiqSN26oVe6dyxM3zolc%2BOPkjiDNk1MqwNr43tT3JsZz4qEQGF9d39DRN3CDjUuZRPt4SEKSSL35upncRJiw2uBOtV%2FvSuGLNpZ2J79v1%2Ba1Oo9c4Xhig7SFYbE6Jwk1yXRJLTSw0DKFu%2FEpcdjpOnx%2F6YawMBNIa5iu5L637S91jT1Xd3EGmxZFq%2Bi6bHdCJAC8g%3D%3D" "C%3A%5COracle%5CMiddleware%5Cuser_projects%5Cepmsystem1" "%25EPM_ORACLE_HOME%25%2F..%2Fjdk160_35"
    Is there anywhere we need to set the EPM home? Also, please let me know what PSU1 is.
    Thanks in advance.
    Praneeth
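
    When an FDMEE export hangs like this, it can also help to look at what the underlying ODI session is actually doing in the work repository. An illustrative query against the standard SNP_SESSION repository table (status codes: R = running, D = done, E = error, W = waiting):
    -- Recent ODI sessions and their status, newest first.
    SELECT sess_no, sess_name, sess_status, sess_beg, sess_end
    FROM   snp_session
    ORDER  BY sess_beg DESC;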

  • How to load the data from a staging table to interface table

    Hi,
    I have a staging table with the following columns:
    invoice_number, invoice_date, vendor_name, vendor_site_code, description, line_amount, line_description, segment1, segment2, segment3, segment4, segment5
    I want to insert this data into the Oracle interface tables:
    the 1st table is ap_invoices_interface, which is the primary (header) table,
    and the 2nd is ap_invoice_lines_interface.
    For each invoice, I have to insert the sum of the line amounts into the amount column of the primary table.
    Can anyone please provide the code?
    Any help is appreciated.

    Hi,
    You need to write a PL/SQL procedure or package to validate the data and insert it.
    First you need to know which columns are mandatory. I'm giving a simple example here:
    CREATE OR REPLACE PROCEDURE xxstg_po_vendors_int (errbuf OUT VARCHAR2, retcode OUT NUMBER) IS
      CURSOR po_cur IS
        SELECT sno, vendor_name, summary_flag, enabled_flag
        FROM   xxstg_po_vendor;
      l_summary_flag VARCHAR2(1);
      l_enabled_flag VARCHAR2(1);
      l_err_msg      VARCHAR2(240);
      l_flag         VARCHAR2(2);
    BEGIN
      DELETE FROM ap_suppliers_int;
      COMMIT;
      FOR rec_cur IN po_cur LOOP
        l_flag    := 'A';
        l_err_msg := NULL;
        -- Validate summary_flag against po_vendors (ROWNUM = 1 makes this an
        -- existence check; without it a common flag value raises TOO_MANY_ROWS).
        BEGIN
          SELECT summary_flag INTO l_summary_flag
          FROM   po_vendors
          WHERE  summary_flag = rec_cur.summary_flag
          AND    ROWNUM = 1;
        EXCEPTION
          WHEN NO_DATA_FOUND THEN
            l_summary_flag := NULL;
            l_flag         := 'E';
            l_err_msg      := 'Summary_flag does not exist';
        END;
        -- Validate enabled_flag the same way.
        BEGIN
          SELECT enabled_flag INTO l_enabled_flag
          FROM   po_vendors
          WHERE  enabled_flag = rec_cur.enabled_flag
          AND    ROWNUM = 1;
        EXCEPTION
          WHEN NO_DATA_FOUND THEN
            l_enabled_flag := NULL;
            l_flag         := 'E';
            l_err_msg      := 'Enabled_flag does not exist';
        END;
        -- Only insert records that passed validation.
        IF l_flag = 'E' THEN
          fnd_file.put_line(fnd_file.log, 'Record ' || rec_cur.sno || ' rejected: ' || l_err_msg);
        ELSE
          fnd_file.put_line(fnd_file.log, 'Inserting record ' || rec_cur.sno || ' into interface table');
          INSERT INTO ap_suppliers_int
            (vendor_interface_id, vendor_name, summary_flag, enabled_flag)
          VALUES
            (rec_cur.sno, rec_cur.vendor_name, rec_cur.summary_flag, rec_cur.enabled_flag);
        END IF;
      END LOOP;
      COMMIT;
    END;
    Regards
    Goutham
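
    To address the original question directly, here is a minimal sketch of the two inserts, assuming a hypothetical staging table STG_AP_INVOICES with the columns listed in the question. Mandatory interface columns (ORG_ID, SOURCE values, etc.) vary by setup, so treat this as a starting point only:
    DECLARE
      l_invoice_id NUMBER;
    BEGIN
      -- One header per invoice; invoice amount = sum of its line amounts.
      FOR h IN (SELECT invoice_number,
                       MIN(invoice_date)     invoice_date,
                       MIN(vendor_name)      vendor_name,
                       MIN(vendor_site_code) vendor_site_code,
                       MIN(description)      description,
                       SUM(line_amount)      invoice_amount
                FROM   stg_ap_invoices
                GROUP  BY invoice_number)
      LOOP
        SELECT ap_invoices_interface_s.NEXTVAL INTO l_invoice_id FROM dual;

        INSERT INTO ap_invoices_interface
          (invoice_id, invoice_num, invoice_date, vendor_name,
           vendor_site_code, description, invoice_amount, source)
        VALUES
          (l_invoice_id, h.invoice_number, h.invoice_date, h.vendor_name,
           h.vendor_site_code, h.description, h.invoice_amount, 'MANUAL');

        -- One line per staging row, coded to the staged segments.
        INSERT INTO ap_invoice_lines_interface
          (invoice_id, invoice_line_id, line_number, line_type_lookup_code,
           amount, description, dist_code_concatenated)
        SELECT l_invoice_id,
               ap_invoice_lines_interface_s.NEXTVAL,
               ROWNUM,
               'ITEM',
               line_amount,
               line_description,
               segment1 || '.' || segment2 || '.' || segment3 || '.' ||
               segment4 || '.' || segment5
        FROM   stg_ap_invoices
        WHERE  invoice_number = h.invoice_number;
      END LOOP;
      COMMIT;
    END;
    /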

  • Injecting data into a star schema from a flat staging table

    I'm trying to work out the best approach for getting data from a very flat staging table and loading it into a star schema: I take a row from a table with, for example, 50 different attributes about a person and then load these into a host of different tables, including linking tables.
    One of the attributes in the staging table will be an instruction to either insert the person and their new data, update a person and some component of their data, or even terminate a person's records.
    I plan to use PL/SQL but I'm not sure on the best approach.
    The staging table data will be loaded every 10 minutes and will contain about 300 updates.
    I'm not sure if I should just select the staging records into a cursor then insert into the various tables?
    Has anyone got any working examples based on a similar experience?
    I can provide a working example if required.

    The database has some elements that make SQL a tad harder to use.
    For example:
    CREATE TABLE staging
    (person_id NUMBER(10) NOT NULL ,
    title VARCHAR2(15) NULL ,
    initials VARCHAR2(5) NULL ,
    forename VARCHAR2(30) NULL ,
    middle_name VARCHAR2(30) NULL ,
    surname VARCHAR2(50) NULL,
    dial_number VARCHAR2(30) NULL,
    Is_Contactable     CHAR(1) NULL);
    INSERT INTO staging
    (person_id, title, initials, forename, middle_name, surname, dial_number, is_contactable)
    VALUES (12345, 'Mr', NULL, 'Joe', NULL, 'Bloggs', '0117512345', 'Y');
    CREATE TABLE person
    (person_id NUMBER(10) NOT NULL ,
    title VARCHAR2(15) NULL ,
    initials VARCHAR2(5) NULL ,
    forename VARCHAR2(30) NULL ,
    middle_name VARCHAR2(30) NULL ,
    surname VARCHAR2(50) NULL);
    CREATE UNIQUE INDEX XPKPerson ON Person
    (Person_ID ASC);
    ALTER TABLE Person
    ADD CONSTRAINT XPKPerson PRIMARY KEY (Person_ID);
    CREATE TABLE person_comm
    (person_id NUMBER(10) NOT NULL ,
    comm_type_id NUMBER(10) NOT NULL ,
    comm_id NUMBER(10) NOT NULL );
    CREATE UNIQUE INDEX XPKPerson_Comm ON Person_Comm
    (Person_ID ASC,Comm_Type_ID ASC,Comm_ID ASC);
    ALTER TABLE Person_Comm
    ADD CONSTRAINT XPKPerson_Comm PRIMARY KEY (Person_ID,Comm_Type_ID,Comm_ID);
    CREATE TABLE person_comm_preference
    (person_id NUMBER(10) NOT NULL ,
    comm_type_id NUMBER(10) NOT NULL ,
    Is_Contactable     CHAR(1) NULL);
    CREATE UNIQUE INDEX XPKPerson_Comm_Preference ON Person_Comm_Preference
    (Person_ID ASC,Comm_Type_ID ASC);
    ALTER TABLE Person_Comm_Preference
    ADD CONSTRAINT XPKPerson_Comm_Preference PRIMARY KEY (Person_ID,Comm_Type_ID);
    CREATE TABLE comm_type
    (comm_type_id NUMBER(10) NOT NULL ,
    NAME VARCHAR2(25) NULL ,
    description VARCHAR2(100) NULL ,
    comm_table_name VARCHAR2(50) NULL);
    CREATE UNIQUE INDEX XPKComm_Type ON Comm_Type
    (Comm_Type_ID ASC);
    ALTER TABLE Comm_Type
    ADD CONSTRAINT XPKComm_Type PRIMARY KEY (Comm_Type_ID);
    insert into comm_type (comm_type_id, NAME, description, comm_table_name) values ('23456','HOME PHONE','Home Phone Number','PHONE');
    CREATE TABLE phone
    (phone_id NUMBER(10) NOT NULL ,
    dial_number VARCHAR2(30) NULL);
    Take the record from staging, then update:
    'person'
    'person_comm_preference', based on a comm_type of 'HOME PHONE'
    'person_comm', derived from 'person' and 'person_comm_preference'
    Then update 'phone' with the dial number, using the link through 'person_comm', whose composite primary key includes 'Comm_ID', which relates to the 'phone' table's primary key, Phone_ID.
    Does your head hurt as much as mine?
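    A minimal PL/SQL sketch of the cursor-driven approach, based on the DDL above. The per-row insert/update/terminate instruction column isn't in the posted DDL, so this sketch simply upserts; phone_seq is a hypothetical sequence:
    DECLARE
      l_comm_type_id comm_type.comm_type_id%TYPE;
      l_phone_id     phone.phone_id%TYPE;
    BEGIN
      -- Look up the comm type once, outside the loop.
      SELECT comm_type_id INTO l_comm_type_id
      FROM comm_type WHERE name = 'HOME PHONE';
      FOR r IN (SELECT * FROM staging) LOOP
        -- Upsert the person row.
        MERGE INTO person p
        USING (SELECT r.person_id AS pid FROM dual) s
        ON (p.person_id = s.pid)
        WHEN MATCHED THEN UPDATE SET
          title = r.title, initials = r.initials, forename = r.forename,
          middle_name = r.middle_name, surname = r.surname
        WHEN NOT MATCHED THEN INSERT
          (person_id, title, initials, forename, middle_name, surname)
          VALUES (r.person_id, r.title, r.initials, r.forename,
                  r.middle_name, r.surname);
        -- Upsert the contact preference for home phone.
        MERGE INTO person_comm_preference pcp
        USING (SELECT r.person_id AS pid FROM dual) s
        ON (pcp.person_id = s.pid AND pcp.comm_type_id = l_comm_type_id)
        WHEN MATCHED THEN UPDATE SET is_contactable = r.is_contactable
        WHEN NOT MATCHED THEN INSERT (person_id, comm_type_id, is_contactable)
          VALUES (r.person_id, l_comm_type_id, r.is_contactable);
        -- New phone row keyed by the hypothetical sequence, then the link row.
        INSERT INTO phone (phone_id, dial_number)
        VALUES (phone_seq.NEXTVAL, r.dial_number)
        RETURNING phone_id INTO l_phone_id;
        INSERT INTO person_comm (person_id, comm_type_id, comm_id)
        VALUES (r.person_id, l_comm_type_id, l_phone_id);
      END LOOP;
      COMMIT;
    END;
    /
    For 300 rows every 10 minutes a row-by-row loop like this is perfectly adequate; at higher volumes the same logic can be expressed as set-based MERGE / INSERT ... SELECT statements straight off the staging table.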

  • Sliding window scenario in PTF vs availability of recently loaded data in the staging table for reporting purposes

    Hello everybody, I am a SQL Server DBA and I am planning to implement table partitioning on some of our large tables in our data warehouse. I am thinking of designing it using the sliding window scenario. I do have one concern though: I think the staging tables we use for loading new data and for switching out the old partition are going to be non-partitioned, right? I don't have an issue with the second staging table, the one used for switching out the old partition. My concern is with the first staging table, the one used for switch-in: since this table is non-partitioned and holds the new data, how are we going to access this data for reporting purposes before we switch it in to our target partitioned table? Say this staging table holds one month's worth of data and we switch it in at the end of the month. Correct me if I am wrong, but one way I can think of accessing this non-partitioned staging table is by creating views, though we don't want to change our code.
    Could you share your thoughts and experiences?
    We really appreciate your help.
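    One option, sketched below in dialect-neutral SQL: a view that unions the switch-in staging table with the partitioned target, so reports see current-month rows before the switch. The table and column names here are placeholders, not from the original post:
    CREATE VIEW sales_reporting AS
    SELECT sale_date, customer_id, amount
    FROM   sales_partitioned        -- historical months, partitioned
    UNION ALL
    SELECT sale_date, customer_id, amount
    FROM   sales_staging_switch_in; -- current month, awaiting switch-in
    Reports query the view instead of the base table; after the month-end switch the staging table is empty, so the view keeps returning correct results with no further code changes.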

    Hi BG516,
    According to your description, you need to implement table partitioning on some of the large tables in your data warehouse, and the partitioned table should only hold one month of data; please correct me if I have misunderstood anything.
    In this case, you can create a non-partitioned table and import the records that are more than one month old into it, leaving the records less than one month old in the table in your data warehouse. Then create a job to copy the data from the partitioned table into the non-partitioned table on the last day of each month. That way the partitioned table only contains the data for the current month. Please refer to the links below for the details.
    http://blog.sqlauthority.com/2007/08/15/sql-server-insert-data-from-one-table-to-another-table-insert-into-select-select-into-table/
    https://msdn.microsoft.com/en-us/library/ms190268.aspx?f=255&MSPPError=-2147217396
    If this is not what you want, please provide us more information, so that we can make further analysis.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Creation of staging table - quickest way.

    Hi,
    I need to create a staging table with roughly 370 fields. The source's specs are not clear, and neither are some of the staging table's fields. So, effectively, I first need to understand the staging fields and the source fields. The end user has been given a target date based on some generic guesswork (not by me).
    I would like to know the best approach to complete the activity quickly - at the very least, to document all the staging fields and source fields. What I have now is one Excel file with four columns - Staging Field, Source Table and Field, Staging Field Type, and Staging Field Length. Of the 370 staging fields, I have completed just 50% and the target date is approaching fast - I have not touched the Oracle package development yet.
    This could be a generic question, but in order to meet the deadline, could you please shed some light on the quickest way/tips to complete the mapping documentation (or to create the staging table)?
    Thanks in advance,
    Manoj.

    MDixit wrote:
    [the original question, quoted in full - snipped]
    The first thing that comes to mind is the unclear specifications. (Actually, the first thing that came to my mind was that tables have columns, not fields, but anyway...)
    You need to clarify what is coming from the source system before you will be able to intelligently map it to your target system. You may have to push back on your client and tell them that the deadline can't be met unless they provide more information about their source system.
    How did they choose those 370 columns? How did they know these needed to be part of whatever process you are completing? If they know that these columns need to be moved to a target system, then they should be able to tell you why.
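    One practical time-saver, sketched below: load the Excel mapping (saved as CSV) into a small mapping table and generate the staging DDL from it, so the spreadsheet stays the single source of truth. Table and column names here are assumptions for illustration:
    -- Mapping table mirroring the four spreadsheet columns.
    CREATE TABLE stg_mapping (
      stg_column VARCHAR2(30),
      src_ref    VARCHAR2(100),  -- source table and field
      stg_type   VARCHAR2(30),
      stg_length NUMBER
    );
    -- Emit a CREATE TABLE statement from the mapping rows.
    BEGIN
      dbms_output.put_line('CREATE TABLE staging_tbl (');
      FOR r IN (SELECT stg_column, stg_type, stg_length,
                       ROW_NUMBER() OVER (ORDER BY stg_column) AS rn,
                       COUNT(*) OVER () AS cnt
                FROM stg_mapping) LOOP
        dbms_output.put_line('  ' || r.stg_column || ' ' || r.stg_type ||
          CASE WHEN r.stg_type = 'VARCHAR2'
               THEN '(' || r.stg_length || ')' END ||
          CASE WHEN r.rn < r.cnt THEN ',' END);
      END LOOP;
      dbms_output.put_line(');');
    END;
    /
    Once the spreadsheet is complete, regenerating the DDL after a mapping change becomes a re-run rather than a manual edit of a 370-column statement.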

  • Read data from E$ table and insert into staging table

    Hi all,
    I am new to ODI. I would like your help in understanding how to read data from an "E$" table and insert it into a staging table.
    Scenario:
    Two columns in a flat file, employee id and employee name, need to be loaded into a datastore EMP+. A check constraint is added so that only data with employee names in upper case is loaded into the datastore. Check control is set to flow and static. Right-click on the datastore, select Control and then Check. The rows that violated the check constraint are kept in the E$_EMP+ table.
    Problem:
    The issue is that I want to read the data from the E$_EMP+ table, transform the employee name into upper case, and move the corrected data from E$_EMP+ into EMP+. Please advise me on how to automatically handle "soft" exceptions in ODI.
    Thank you

    Hi Himanshu,
    I have already tried the approach you suggested, and it works. However, the scenario I described was very basic. Based on the error logged in the E$_EMP table, I will have to design the interface to apply more business logic before the data is actually merged into the target table.
    Before the business logic is applied, I need to read each row from the E$_EMP table first, and I do not know how to read from the E$_EMP table.
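    At the SQL level the error table is just another table, so a minimal sketch of the reload looks like the following. The physical names (E$_EMP, EMP) and the two data columns are assumptions - the actual E$ layout adds ODI housekeeping columns that vary by version:
    -- Reload rows whose only problem was a lower-case name,
    -- fixing the name on the way back in.
    INSERT INTO emp (emp_id, emp_name)
    SELECT emp_id, UPPER(emp_name)
    FROM   e$_emp;
    -- Clear the processed rows so they are not reloaded twice.
    DELETE FROM e$_emp;
    In ODI itself the same idea is usually an interface whose source datastore is the reverse-engineered E$ table, with the UPPER() transformation on the target mapping.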

  • How to skip a row from being inserted into the staging table

    Hi Everyone,
    Actually, I am transforming data from a source table to a staging table, and then from staging to a final table. I have generated a primary key using a sequence. I set the insert method of the staging table to truncate/insert, so every time the mapping is loaded the staging table is truncated and new data is inserted. But because I am using a sequence in the staging table, it gives new numbering to old data from the source table, and that data would be duplicated in the final target table. For this reason I am using a key lookup on some of the input attributes and then trying to avoid the duplication with an expression. In each output attribute of the expression, I put the case statement
    CASE WHEN INGRP1.ROW_ID IS NULL
    THEN
    INGRP1.ID
    END
    Due to this condition I am getting the error:
    Warning
    ORA-01400: cannot insert NULL into ("SCOTT"."STG_TARGET_TABLE"."ROW_ID")
    But I am stuck on what condition or statement I should write to skip inserting a row when the expression returns null. I want to insert data only when INGRP1.ROW_ID IS NULL.
    Kindly help me out.
    Thanks
    Regards
    Suhail Dayer

    You don't need the tables to be identical to use MINUS; only the select lists must match. Assuming you have the same business key (one or more columns that uniquely identify a row of your source data) in both the source and the final table, you can do the following:
    - Use a Set Operation where the result is the Business Key of the Staging table MINUS the Business Key of the final table
    - The output of the Set Operation is then joined to the Staging table to get the rest of the attributes for these rows
    - The output of the Join is inserted into the final table
    This will make sure only rows with new Business Keys are loaded.
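    In plain SQL the same pattern looks like the sketch below (staging/final table names, the key, and the sequence are placeholders):
    -- Insert only rows whose business key is not yet in the final table.
    INSERT INTO final_tbl (business_key, attr1, attr2, row_id)
    SELECT s.business_key, s.attr1, s.attr2, final_seq.NEXTVAL
    FROM   stg_tbl s
    JOIN  (SELECT business_key FROM stg_tbl
           MINUS
           SELECT business_key FROM final_tbl) new_keys
      ON   s.business_key = new_keys.business_key;
    Because the sequence is applied only to the surviving rows, re-running the load cannot duplicate existing business keys in the final table.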
    Hope this helps,
    Roald

  • Conditional Insert into staging table by using sqlloader

    Hi,
    In Oracle Apps I'm programmatically submitting a concurrent program which calls SQL*Loader and inserts into a staging table.
    This table consists of 30 columns. The program has one input parameter. If the parameter value = REQUIRED, then it should insert into the first three columns of the staging table. If it's APPROVED, then it should insert into the first 10 columns of the same table.
    The data file is a pipe-delimited file which may or may not have all possible values: for REQUIRED, I may not have all three column values.

    >
    I think you understood things wrongly. OP marked UTL_FILE as the correct answer, which is again a server-side solution.
    >
    Perhaps you failed to notice that the answer now marked correct was posted AFTER mine. I'm not clairvoyant, and OP did not state whether a server-side or client-side solution was appropriate.
    I stand by my comments. Using an external table is an alternative to SQL*Loader but, as I said, simply using an external table instead of SQL*Loader doesn't address OP's problem.
    >
    And IMO, an external table will be faster than UTL_FILE
    >
    I'd be more concerned with why OP wants to write a procedure to parse flat files, deal with data conversion errors and perform table inserts when Oracle already provides that functionality through either external tables or SQL*Loader.
    I would suggest loading the file into a new staging table that can hold all of the columns that might be contained in the flat file. Then the appropriate data can be moved to OP's staging table according to the required business rules.
    Sounds like we each think OP should reconsider the architecture.
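    For reference, a minimal external-table sketch of the load-everything-then-filter approach suggested above; the directory, file, and column names are placeholders:
    -- One-time setup: a directory object pointing at the data files.
    -- CREATE DIRECTORY data_dir AS '/u01/app/data';
    CREATE TABLE staging_ext (
      col1 VARCHAR2(100),
      col2 VARCHAR2(100),
      col3 VARCHAR2(100)
      -- ... out to the widest layout the file can have
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '|'
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('input.dat')
    );
    -- Then apply the business rule in SQL, e.g. only the first three
    -- columns when the concurrent program's parameter is REQUIRED.
    INSERT INTO staging_tbl (c1, c2, c3)
    SELECT col1, col2, col3 FROM staging_ext;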

  • More than one fact table...

    Hi.
    I have tried OLAP until now with only one fact table.
    But now I have more than one. To start, I added one more.
    I am always using SOLVED LEVEL...LOWEST LEVEL.
    I always receive the following error when creating the cube with these measures:
    "exact fetch returns more than requested number of rows"
    What shall I look for when dealing with more than one fact table?
    Thanks.
    ODDS
    :: ... and still getting very poor performance ...

    1.
    Well... I saw the Global star schema and we have two fact tables there!
    Do I always have to build a different cube for each fact table?
    2.
    I have built cubes and created a Java client and a JSP client.
    Performance is much better in the JSP client using the AppServer (sure!).
    The power of the JSP client is more limited, I presume.
    I wonder if I can do things such as setCellEditing for a crosstab in both.
    3.
    Some aggregation questions:
    Every time I create a cube using CWM2, and also an AW using the AWM wizards with that cube, I get one aggregation plan by default that processes everything online.
    After that I create and deploy my own aggregation plan.
    My question is: what if I don't want to aggregate anything? I want to see, for instance, in BI Beans only the lowest-level values, with everything at the top levels empty.
    I am missing something, because I still have everything aggregated!
    Thanks.
    ODDS

  • More than one Detail table in the Master Detail forms!

    Hi,
    I need to have more than one detail table in a master-detail form.
    I want to see the details of the rows of the first detail table in the second detail table. Please guide me on how to do this.
    Sincerely yours,
    Mozhdeh

    You can do one of two things:
    1) It depends on the nature of your data model. I was able to manage many-to-many
    relationships using views and INSTEAD OF triggers. This solution is somewhat complex but can work in certain situations.
    2) This works for situations where the master record exists: create a page with multiple portlets synchronized on some related key.
    Create the following package for the MD forms that will be placed on the page to be rendered.
    -- this package will facilitate the storage and retrieval of keys used by the related forms.
    create or replace package session_var
    IS
    session_parms portal30.wwsto_api_session;
    g_domain varchar2(2000) := portal30.wwctx_api.get_user;
    function get_id (id in varchar2) return number;
    PROCEDURE SET_ID
    (ID in VARCHAR2
    ,p_val in VARCHAR2
    ,P_URL in VARCHAR2);
    END SESSION_VAR;
    create or replace package body session_var
    IS
    --pkg body
    function get_id(id in varchar2) return number is
    l_store_session portal30.wwsto_api_session;
    l_id number;
    begin
    l_store_session := portal30.wwsto_api_session.load_session(
    p_domain => session_var.g_domain,
    p_sub_domain => 'your domain');
    l_id := l_store_session.get_attribute_as_varchar2( p_name => id);
    return l_id;
    end get_id;
    PROCEDURE SET_ID
    (ID in VARCHAR2
    ,p_val in VARCHAR2
    ,P_URL in VARCHAR2)
    IS
    l_store_session portal30.wwsto_api_session;
    begin
    l_store_session := portal30.wwsto_api_session.load_session(
    p_domain => session_var.g_domain,
    p_sub_domain => 'your domain');
    l_store_session.set_attribute(
    p_name => id,
    p_value => p_val );
    l_store_session.save_session;
    -- Redirect to the page using p_url.
    portal30.wwv_redirect.url(P_URL);
    end set_id;
    END SESSION_VAR;
    -- In the master-detail form, in the 'before display page' section, enter the following code and publish the form as a portlet.
    declare
    l_fs varchar2(4000);
    l_s varchar2(4000);
    v_con_id number;
    begin
    v_con_id := rfq.session_var.get_id('CON_ID'); -- primary key, used to relate the details
    p_session.set_shadow_value( p_block_name => 'MASTER_BLOCK',
    p_attribute_name => 'A_CON_ID', -- form attribute related to the primary key
    p_value => '= '|| v_con_id,
    p_language => portal30.wwctx_api.get_nls_language,
    p_index => 1);
    l_fs := p_session.get_value_as_varchar2(p_block_name => 'MASTER_BLOCK', p_attribute_name => '_FORM_STATE');
    l_s := p_session.get_value_as_varchar2(p_block_name => 'MASTER_BLOCK', p_attribute_name => '_STATUS');
    if l_fs = 'SAVE' and l_s is null then
    WWV_MASTER_GENSYS_1(p_block_name => p_block_name,
    p_object_name => p_object_name,
    p_instance => p_instance,
    p_event_type => p_event_type,
    p_user_args => p_user_args,
    p_session => p_session);
    p_session.save_session;
    end if;
    if l_fs = 'QUERY_AND_SAVE' and l_s is null then
    WWV_MASTER_GENSYS_1(p_block_name => p_block_name,
    p_object_name => p_object_name,
    p_instance => p_instance,
    p_event_type => p_event_type,
    p_user_args => p_user_args,
    p_session => p_session);
    p_session.save_session;
    end if;
    exception
    when others then
    PORTAL30.wwerr_api_error.add(PORTAL30.wwerr_api_error.DOMAIN_WWV,'app','generic','onLink', p1 => sqlerrm);
    raise;
    end;
    --then create other md forms and publish as portlets in a similar manner.
    -- Create a form (form_session_vars) to call the session_var package, and place the following code in the additional PL/SQL tab.
    WWV_GENSYS_1(
    p_block_name => p_block_name ,
    p_object_name => p_object_name,
    p_instance => p_instance ,
    p_event_type => p_event_type ,
    p_user_args => p_user_args ,
    p_session => p_session);
    --then create a page and place the md forms created above as portlets on the page.
    --create a link targeting form_session_vars, and in the link target inputs
    enter the values for your user parameters:
    id= CON_ID --"your primary key name"
    p_url= url/page/"your_page"
    --finally create a report (QBE or standard).
    In the column formatting section, use the link created earlier to direct the user to the target page.
    How it works:
    When the link is selected, form_session_vars is called and runs automatically, setting the primary key values
    in the user session store. This step is required or the resulting page will not render properly. The user is then redirected to the page where the portlets are rendered. As the portlets render, the 'before display page' code calls the session_var package to retrieve the key and put the form into query_update mode, returning the data.
    The portlets finish in query_and_save mode with details in update mode; the allowable insert, delete and none actions will be available for the details.
    Benefits: the session_var package code is reusable, as are the form_session_vars form and the link. Passing the key name and values is done at the report level and detailed in the report links. The MD forms will need to reference their related keys.
