Staging table for PDH

Hi,
Can anyone please tell me what the staging tables for PDH are?
I am using ODI to move data from eBS to PDH.
Where should I put the data, and how can this be achieved?
Thanks

Some of the staging tables are:
1) Items -- MTL_SYSTEM_ITEMS_INTERFACE
2) Item Categories -- MTL_ITEM_CATEGORIES_INTERFACE
3) Cross References -- MTL_CROSS_REFERENCES_INTERFACE
4) Item Revisions -- MTL_ITEM_REVISIONS_INTERFACE
5) Item Relationships -- MTL_RELATED_ITEMS_INTERFACE
6) Item Catalog Groups -- MTL_ITEM_CAT_GRPS_INTERFACE
7) Attribute Groups -- EGO_ATTR_GROUPS_INTERFACE
8) Item User-Defined Attributes -- EGO_ITM_USR_ATTR_INTRFC
9) Item Associations -- EGO_ITEM_ASSOCIATIONS_INTF
10) Item People -- EGO_ITEM_PEOPLE_INTF
There are more; you can find them all by querying the data dictionary.
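For example, a query along these lines lists the candidate interface tables (a sketch; the owner filter is an assumption, so widen it if needed):

SELECT owner, table_name
  FROM all_tables
 WHERE (table_name LIKE '%INTERFACE'
    OR  table_name LIKE '%INTF'
    OR  table_name LIKE '%INTRFC')
   AND owner IN ('INV', 'EGO')  -- assumed owners for the item and UDA interfaces
 ORDER BY owner, table_name;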
Hope it helps.
Thanks,
Priyanshu

Similar Messages

  • Sliding window scenario in PTF vs availability of recently loaded data in the staging table for reporting purposes

    Hello everybody,
    I am a SQL Server DBA and I am planning to implement table partitioning on some of our large tables in our data warehouse. I am thinking of designing it using the sliding window scenario. I do have one concern though: I think the staging tables we use for loading new data and for switching out the old partition are going to be non-partitioned, right? I don't have an issue with the second staging table, the one used for switching out the old partition. My concern is with the first staging table, the one we use for switch-in purposes. Since this table is non-partitioned and holds the new data, how are we going to access this data for reporting purposes before we switch it in to our target partitioned table? Say this staging table holds one month's worth of data and we switch it in at the end of the month. Correct me if I am wrong, but one way I can think of to access this non-partitioned staging table is by creating views, though we don't want to change our code.
    Could you share your thoughts and experiences?
    We really appreciate your help.

    Hi BG516,
    According to your description, you need to implement table partitioning on some of your large tables in your data warehouse, and you need the partitioned table to hold only one month of data; please correct me if I have misunderstood anything.
    In this case, you can create a non-partitioned table and import the records that are more than one month old into it, leaving the records that are less than one month old in the table in your data warehouse. Then create a job that copies the data from the partitioned table into the non-partitioned table on the last day of each month, so that the partitioned table contains only the current month's data. Please refer to the links below for details.
    http://blog.sqlauthority.com/2007/08/15/sql-server-insert-data-from-one-table-to-another-table-insert-into-select-select-into-table/
    https://msdn.microsoft.com/en-us/library/ms190268.aspx?f=255&MSPPError=-2147217396
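    For illustration, the monthly copy can be as small as this T-SQL sketch (table and column names are assumptions):

    -- rows older than the first day of the current month go to the archive table
    INSERT INTO dbo.ArchiveTable
    SELECT *
    FROM dbo.PartitionedTable
    WHERE LoadDate < DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0);

    DELETE FROM dbo.PartitionedTable
    WHERE LoadDate < DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0);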
    If this is not what you want, please provide more information so that we can analyze further.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Change data capture-staging table

    Hi,
    While using Change Data Capture in Oracle 11gR2, I used a source table and a target table in the mapping (both source and target are Oracle).
    Do I have to use a staging table for the mapping instead of the target table?
    If yes, where and how do I create a staging table and put it into the code template mapping?
    Do I have to enable CDC in the property editor for both the source and target tables?

    Could you explain your requirement?
    Where does the change take place?

  • Staging Table Challenge in PL SQL Proc

    Hi guys,
    I need your ideas for the requirement below.
    I have two databases, and daily we sync data from DB1 to DB2 (similar structures) using some plain SQL queries.
    We sync only a few tables, and only a few columns, under certain conditions (we use nearly 25 temp tables to copy the data), but we have been unable to track which data gets updated daily.
    We badly need to track the data that gets updated daily, so please suggest how I could do this.
    Staging tables? Note: we can't maintain staging tables for all the temp tables, and every now and then we also change the table structures.
    Or is there any other way to achieve this?
    At the end we should have the data that was updated each day, and reports on that data.
    Please help me.
    Cheers,
    Naresh

    Naresh,
    Change Data Capture.
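    If full Change Data Capture is heavier than you need, a simple trigger-based audit trail is a lighter-weight alternative. A minimal sketch (table and column names are assumptions):

    CREATE TABLE xx_sync_audit
    ( table_name  VARCHAR2(30)
    , pk_value    NUMBER
    , changed_on  DATE DEFAULT SYSDATE );

    CREATE OR REPLACE TRIGGER xx_trk_some_table
    AFTER INSERT OR UPDATE ON some_synced_table   -- hypothetical synced table
    FOR EACH ROW
    BEGIN
      -- record which row changed and when; report on xx_sync_audit daily
      INSERT INTO xx_sync_audit (table_name, pk_value)
      VALUES ('SOME_SYNCED_TABLE', :NEW.id);
    END;
    /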

  • Usage of a large number of staging tables

    Hi,
    We use many staging tables for our processes in our DWH project, which increases the total number of tables in our schema. Will this put more stress on Oracle catalog/metadata management? Please suggest: what is the optimal number of tables to be maintained in a schema? Thank you.
    Regards,
    DR

    what is the optimal number of tables to be maintained in a schema?
    As few as possible, but no more than necessary.

  • Archive/Truncate stage table for interface

    Hi All,
    We use quite a few staging tables for our interfaces. We need to figure out a way to archive them and then truncate them from time to time. Of course it will depend on the volume and frequency of the data coming into each staging table.
    Could anyone share thoughts on a process to get this done?
    Thanks for your time!

    A lot of interfaces (such as Items and Categories) have a parameter that tells Oracle to delete a record if it was imported successfully. You should make use of that parameter.
    If the interface does not have such a parameter, then you should write custom scripts that run after the interface and delete the successfully processed data; see the sketch below.
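    For illustration, such a post-interface cleanup might look like this (the archive table is an assumption; PROCESS_FLAG = 7 is the usual "import succeeded" value for the items interface, but verify it for your interface):

    -- archive rows that imported successfully, then remove them from the interface table
    INSERT INTO xx_item_iface_archive
    SELECT * FROM mtl_system_items_interface WHERE process_flag = 7;

    DELETE FROM mtl_system_items_interface WHERE process_flag = 7;
    COMMIT;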
    For errors, however, the picture can get complicated.
    A lot depends on how your interface was designed, what kind of error reporting is in place, and how you trigger correction and reprocessing of the errors.
    The safest way is to let the source system resend the data, but that is not always possible.
    The second alternative is to use Oracle screens (such as Sales Order Corrections), or develop custom ones, to let users fix what was wrong and resubmit.
    If all else fails, you should let users delete the errored record.
    If you have any regulatory or audit constraints, then you may want to back up successful as well as error records into another table. But that should not be the first choice. A lot of the time, the knee-jerk requirement from users is that the data must be stored even if nobody will ever look at it. At that point it is not data - it is junk in your database.
    Hope this helps
    Sandeep Gandhi
    Omkar Technologies Inc.
    Independent Techno-functional Consultant

  • Copy from staging table to multiple tables

    We are using an SSIS package to fast-load our data source into a staging table for processing.
    The reason we are using a staging table is that we need to copy the data from staging to our actual DB tables (4 in total), and the insertion order matters because we have foreign key relationships.
    When copying the data from our staging table, should we enumerate through all the records and use an insert-select for each row, or is there a more efficient way to do this?

    Our raw data source is an .mdb file, and we are using SSIS to fast-load it into SQL Server. We are looking to transform the data set into 3 tables (using a stored proc):
    Site (SiteID, Name)
    Gas (ID, Date, Time, GasField1, GasField2, ..., SiteID)
    GenSet (ID, Date, Time, GenSetField1, GenSetField2, ..., SiteID)
    Each record in our raw data source contains a Name field which identifies the Site. We only need to add a new Site row if it does not already exist. This is already coded and working using insert-select and NOT EXISTS.
    We now need to iterate over all records, extract a subset of data for the Gas table and a subset for the GenSet table, and link each row with the associated SiteID using the Name field.
    The insertion order should be the Site table first, then the remaining tables.
    Are you saying it would be better to transform this data using SSIS and not use a staging table and stored procedure?
    I would prefer the staging + stored procedure approach here, as that involves set-based logic and is faster performance-wise.
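    A minimal T-SQL sketch of that set-based stored procedure, using the tables above (assumes SiteID is an identity column; GenSet follows the same pattern as Gas):

    -- add only the sites that do not exist yet
    INSERT INTO dbo.Site (Name)
    SELECT DISTINCT s.Name
    FROM dbo.Staging AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Site t WHERE t.Name = s.Name);

    -- then insert the Gas rows, resolving SiteID through the Name field
    INSERT INTO dbo.Gas ([Date], [Time], GasField1, GasField2, SiteID)
    SELECT s.[Date], s.[Time], s.GasField1, s.GasField2, t.SiteID
    FROM dbo.Staging AS s
    JOIN dbo.Site AS t ON t.Name = s.Name;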
    Visakh

  • Updating 3 value sets daily based on data coming into a staging table

    My requirement is to update 3 value sets daily based on data coming into my staging table. What API is used for this, and how do I map the API to our staging table? I am totally new to Oracle and Apps. Please help. Thanks!

    Hi,
    You could use FND_FLEX_LOADER_APIS.UP_VALUE_SET_VALUE to upload them from the staging table (I suppose you mean value set values...).
    You can find sample scripts if you Google around.
    What do you mean by "how to map any API to our staging table"?
    You should do at least the following mapping (i.e., decide which column(s) in the staging table will provide this information):
    - the names of the 3 value sets you're going to update/upload (I suppose these are existing value sets, or ones which have already been created)
    - the value set values and descriptions
    Try to start with something, and if there are any issues the community can then help... but for the time being, with the description of the problem you have provided, that's the best I can do.
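    To give that mapping a concrete shape, a hedged skeleton follows (the staging table and its columns are assumptions; take the API's exact parameter list from the FND_FLEX_LOADER_APIS package specification rather than this sketch):

    DECLARE
      CURSOR c_stg IS
        SELECT value_set_name, flex_value, description
          FROM xx_value_set_stg;   -- hypothetical staging table and columns
    BEGIN
      FOR r IN c_stg LOOP
        -- Call FND_FLEX_LOADER_APIS.UP_VALUE_SET_VALUE here, passing the upload
        -- mode plus r.value_set_name, r.flex_value and r.description; see the
        -- package specification for the exact parameter list.
        NULL;
      END LOOP;
      COMMIT;
    END;
    /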

  • Problem during  Data Warehouse Loading (from staging table to Cube)

    Hi all,
    I have created a staging module in OWB to load my flat files into my staging tables. I have created a warehouse module to load my staging tables into the dimension and cube that I have created.
    My scenario:
    I have a temp_table_transaction which was loaded from my flat files. This table was loaded with 168,271,269 records from the flat file.
    I have created a mapping in OWB which takes my temp_table_transaction, joins it with other tables, applies some expressions and conversion functions, and fills a new table called stg_tbl_transaction in my staging module. Running this mapping takes 3 hours and 45 minutes with this mapping configuration:
    Default operating mode in the mapping's runtime parameters = Set based
    My dimension filled correctly, but I have two problems when I want to transfer my staging table to my cube:
    Problem #1:
    I created a cube called transaction_cube with OWB, and it generated and deployed correctly.
    I created a map to fill my cube with the 168,271,268 records in the staging table called stg_tbl_transaction and deployed it to the server (my cube map operating mode is set based),
    but after running this map it had not completed after 9 hours, and I was forced to cancel my running map by killing its sessions. I want to know whether this load time is acceptable for this volume of data, or whether we should expect to spend more time. Please let me know if anybody has seen this issue.
    Problem #2:
    To test my map, I created a map configured as set based in the operating modes, selected my stg_tbl_transaction (with 168,271,268 records in it) as the source, and created another table to transfer and load my data into. I wanted to test the time we should expect for this simple map, but after 5 hours my data had not loaded into the new table. I want to know where my problem is. Should I have set something in the map configuration, or something else? Please guide me on these problems.
    MY SERVER CONFIGURATION:
    I run OWB on two-socket Xeon 5500 series with 192 GB RAM and disks in a RAID 10 array.
    Regards,
    Sahar

    For all of you:
    It is possible to load from an InfoSet to a cube; we did it, and it was OK.
    Data really is loaded from the InfoSet (cube + master data) to the cube.
    When you create a transformation under a cube, the InfoSet is proposed, and it works fine...
    Now the process is no longer operational and I don't understand why...
    Loading from an InfoSet to a cube is possible; I can send you screenshots if you want...
    Christophe

  • Unable to load data from FDMEE staging table to HFM tables

    Hi,
    We have installed EPM 11.1.2.3 with all the latest related products (ODI/FDMEE) in our development environment.
    We are in the process of loading data from EBS R12 to HFM using ERPi (Data Management in EPM 11.1.2.3). We can import and validate the data, but when we try to export the data to HFM, the process keeps running for hours (it neither gives an error nor completes).
    When we check the process details in the ODI work repository, the statuses are as follows:
    COMM_LOAD_BALANCES - Running ......... (for the past 1 day, still running)
    EBS_GL_LOAD_BALANCES_DATA - Successful
    COMM_LOAD_BALANCES - Successful
    We can load data into the staging table of the FDMEE database schema, and we are even able to drill through to the source system (EBS R12) from the Data Load Workbench, but we are not able to load the data into the HFM application.
    Log details from the process are below.
    Log details from the process are below.
    2013-11-05 17:04:59,747 INFO  [AIF]: FDMEE Process Start, Process ID: 31
    2013-11-05 17:04:59,747 INFO  [AIF]: FDMEE Logging Level: 4
    2013-11-05 17:04:59,748 INFO  [AIF]: FDMEE Log File: C:\FDMEE\outbox\logs\OASIS_31.log
    2013-11-05 17:04:59,748 INFO  [AIF]: User:admin
    2013-11-05 17:04:59,748 INFO  [AIF]: Location:VisionLoc (Partitionkey:1)
    2013-11-05 17:04:59,749 INFO  [AIF]: Period Name:JAN (Period Key:1/1/12 12:00 AM)
    2013-11-05 17:04:59,749 INFO  [AIF]: Category Name:VisionCat (Category key:3)
    2013-11-05 17:04:59,749 INFO  [AIF]: Rule Name:VisionRule (Rule ID:2)
    2013-11-05 17:05:00,844 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2013-11-05 17:05:00,844 INFO  [AIF]: Java Platform: java1.6.0_37
    2013-11-05 17:05:02,910 INFO  [AIF]: -------START IMPORT STEP-------
    2013-11-05 17:05:02,953 INFO  [AIF]: -------END IMPORT STEP-------
    2013-11-05 17:05:03,030 INFO  [AIF]: -------START EXPORT STEP-------
    2013-11-05 17:05:03,108 INFO  [AIF]:
    Move Data for Period 'JAN'
    Any help on the above is much appreciated.
    Thank you
    Regards
    Praneeth

    Hi,
    I have followed steps 1 & 2 above. Now the log shows something like the following:
    2013-11-05 09:47:31,179 INFO  [AIF]: FDMEE Process Start, Process ID: 22
    2013-11-05 09:47:31,179 INFO  [AIF]: FDMEE Logging Level: 4
    2013-11-05 09:47:31,179 INFO  [AIF]: FDMEE Log File: C:\FDMEE\outbox\logs\OASIS_22.log
    2013-11-05 09:47:31,180 INFO  [AIF]: User:admin
    2013-11-05 09:47:31,180 INFO  [AIF]: Location:VisionLoc (Partitionkey:1)
    2013-11-05 09:47:31,180 INFO  [AIF]: Period Name:JAN (Period Key:1/1/12 12:00 AM)
    2013-11-05 09:47:31,180 INFO  [AIF]: Category Name:VisionCat (Category key:3)
    2013-11-05 09:47:31,181 INFO  [AIF]: Rule Name:VisionRule (Rule ID:2)
    2013-11-05 09:47:32,378 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2013-11-05 09:47:32,378 INFO  [AIF]: Java Platform: java1.6.0_37
    2013-11-05 09:47:34,652 INFO  [AIF]: -------START IMPORT STEP-------
    2013-11-05 09:47:34,698 INFO  [AIF]: -------END IMPORT STEP-------
    2013-11-05 09:47:34,744 INFO  [AIF]: -------START EXPORT STEP-------
    2013-11-05 09:47:34,828 INFO  [AIF]:
    Move Data for Period 'JAN'
    2013-11-08 11:49:10,478 INFO  [AIF]: FDMEE Process Start, Process ID: 22
    2013-11-08 11:49:10,493 INFO  [AIF]: FDMEE Logging Level: 5
    2013-11-08 11:49:10,493 INFO  [AIF]: FDMEE Log File: C:\FDMEE\outbox\logs\OASIS_22.log
    2013-11-08 11:49:10,494 INFO  [AIF]: User:admin
    2013-11-08 11:49:10,494 INFO  [AIF]: Location:VISIONLOC (Partitionkey:1)
    2013-11-08 11:49:10,494 INFO  [AIF]: Period Name:JAN (Period Key:1/1/12 12:00 AM)
    2013-11-08 11:49:10,495 INFO  [AIF]: Category Name:VISIONCAT (Category key:1)
    2013-11-08 11:49:10,495 INFO  [AIF]: Rule Name:VISIONRULE (Rule ID:1)
    2013-11-08 11:49:11,903 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2013-11-08 11:49:11,909 INFO  [AIF]: Java Platform: java1.6.0_37
    2013-11-08 11:49:14,037 INFO  [AIF]: -------START IMPORT STEP-------
    2013-11-08 11:49:14,105 INFO  [AIF]: -------END IMPORT STEP-------
    2013-11-08 11:49:14,152 INFO  [AIF]: -------START EXPORT STEP-------
    2013-11-08 11:49:14,178 DEBUG [AIF]: CommData.exportData - START
    2013-11-08 11:49:14,183 DEBUG [AIF]: CommData.getRuleInfo - START
    2013-11-08 11:49:14,188 DEBUG [AIF]:
            SELECT brl.RULE_ID
            ,br.RULE_NAME
            ,brl.PARTITIONKEY
            ,brl.CATKEY
            ,part.PARTVALGROUP
            ,br.SOURCE_SYSTEM_ID
            ,ss.SOURCE_SYSTEM_TYPE
            ,CASE
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'EBS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'PS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FUSION%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FILE%' THEN 'N'
              ELSE 'Y'
            END SOURCE_ADAPTER_FLAG
            ,app.APPLICATION_ID
            ,app.TARGET_APPLICATION_NAME
            ,app.TARGET_APPLICATION_TYPE
            ,app.DATA_LOAD_METHOD
            ,brl.PLAN_TYPE
            ,CASE brl.PLAN_TYPE
              WHEN 'PLAN1' THEN 1
              WHEN 'PLAN2' THEN 2
              WHEN 'PLAN3' THEN 3
              WHEN 'PLAN4' THEN 4
              WHEN 'PLAN5' THEN 5
              ELSE 0
            END PLAN_NUMBER
            ,br.INCL_ZERO_BALANCE_FLAG
            ,br.PERIOD_MAPPING_TYPE
            ,br.INCLUDE_ADJ_PERIODS_FLAG
            ,br.BALANCE_TYPE ACTUAL_FLAG
            ,br.AMOUNT_TYPE
            ,br.BALANCE_SELECTION
            ,br.BALANCE_METHOD_CODE
            ,COALESCE(br.SIGNAGE_METHOD, 'ABSOLUTE') SIGNAGE_METHOD
            ,br.CURRENCY_CODE
            ,br.BAL_SEG_VALUE_OPTION_CODE
            ,brl.EXECUTION_MODE
            ,COALESCE(brl.IMPORT_FROM_SOURCE_FLAG, 'Y') IMPORT_FROM_SOURCE_FLAG
            ,COALESCE(brl.RECALCULATE_FLAG, 'N') RECALCULATE_FLAG
            ,COALESCE(brl.EXPORT_TO_TARGET_FLAG, 'N') EXPORT_TO_TARGET_FLAG
            ,CASE
              WHEN (br.LEDGER_GROUP_ID IS NOT NULL) THEN 'MULTI'
              WHEN (br.SOURCE_LEDGER_ID IS NOT NULL) THEN 'SINGLE'
              ELSE 'NONE'
            END LEDGER_GROUP_CODE
            ,COALESCE(br.BALANCE_AMOUNT_BS, 'YTD') BALANCE_AMOUNT_BS
            ,COALESCE(br.BALANCE_AMOUNT_IS, 'PERIODIC') BALANCE_AMOUNT_IS
            ,br.LEDGER_GROUP
            ,(SELECT brd.DETAIL_CODE
              FROM AIF_BAL_RULE_DETAILS brd
              WHERE brd.RULE_ID = br.RULE_ID
              AND brd.DETAIL_TYPE = 'LEDGER'       
            ) PS_LEDGER
            ,CASE lg.LEDGER_TEMPLATE
              WHEN 'COMMITMENT' THEN 'Y'
              ELSE 'N'
            END KK_FLAG
            ,p.LAST_UPDATED_BY
            ,p.AIF_WEB_SERVICE_URL WEB_SERVICE_URL
            ,p.EPM_ORACLE_INSTANCE
            FROM AIF_PROCESSES p
            INNER JOIN AIF_BAL_RULE_LOADS brl
              ON brl.LOADID = p.PROCESS_ID
            INNER JOIN AIF_BALANCE_RULES br
              ON br.RULE_ID = brl.RULE_ID
            INNER JOIN AIF_SOURCE_SYSTEMS ss
              ON ss.SOURCE_SYSTEM_ID = br.SOURCE_SYSTEM_ID
            INNER JOIN AIF_TARGET_APPLICATIONS app
              ON app.APPLICATION_ID = brl.APPLICATION_ID
            INNER JOIN TPOVPARTITION part
              ON part.PARTITIONKEY = br.PARTITIONKEY
            INNER JOIN TBHVIMPGROUP imp
              ON imp.IMPGROUPKEY = part.PARTIMPGROUP
            LEFT OUTER JOIN AIF_COA_LEDGERS l
              ON l.SOURCE_SYSTEM_ID = p.SOURCE_SYSTEM_ID
              AND l.SOURCE_LEDGER_ID = COALESCE(br.SOURCE_LEDGER_ID,imp.IMPSOURCELEDGERID)
            LEFT OUTER JOIN AIF_PS_SET_CNTRL_REC_STG scr
              ON scr.SOURCE_SYSTEM_ID = l.SOURCE_SYSTEM_ID
              AND scr.SETCNTRLVALUE = l.SOURCE_LEDGER_NAME
              AND scr.RECNAME = 'LED_GRP_TBL'
            LEFT OUTER JOIN AIF_PS_LED_GRP_TBL_STG lg
              ON lg.SOURCE_SYSTEM_ID = scr.SOURCE_SYSTEM_ID
              AND lg.SETID = scr.SETID
              AND lg.LEDGER_GROUP = br.LEDGER_GROUP
            WHERE p.PROCESS_ID = 22
    2013-11-08 11:49:14,195 DEBUG [AIF]:
          SELECT adim.BALANCE_COLUMN_NAME DIMNAME
          ,adim.DIMENSION_ID
          ,dim.TARGET_DIMENSION_CLASS_NAME
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID1
          ) COA_SEGMENT_NAME1
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID2
          ) COA_SEGMENT_NAME2
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID3
          ) COA_SEGMENT_NAME3
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID4
          ) COA_SEGMENT_NAME4
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID5
          ) COA_SEGMENT_NAME5
          ,(SELECT CASE mdd.ORPHAN_OPTION_CODE
              WHEN 'CHILD' THEN 'N'
              WHEN 'ROOT' THEN 'N'
              ELSE 'Y'
            END DIMENSION_FILTER_FLAG
            FROM AIF_MAP_DIM_DETAILS_V mdd
            ,AIF_MAPPING_RULES mr
            WHERE mr.PARTITIONKEY = tpp.PARTITIONKEY
            AND mdd.RULE_ID = mr.RULE_ID
            AND mdd.DIMENSION_ID = adim.DIMENSION_ID
          ) DIMENSION_FILTER_FLAG
          ,tiie.IMPCONCATCHAR
          FROM TPOVPARTITION tpp
          INNER JOIN AIF_TARGET_APPL_DIMENSIONS adim
            ON adim.APPLICATION_ID = 2
          INNER JOIN AIF_DIMENSIONS dim
            ON dim.DIMENSION_ID = adim.DIMENSION_ID
          LEFT OUTER JOIN TBHVIMPITEMERPI tiie
            ON tiie.IMPGROUPKEY = tpp.PARTIMPGROUP
            AND tiie.IMPFLDFIELDNAME = adim.BALANCE_COLUMN_NAME
            AND tiie.IMPMAPTYPE = 'ERP'
          WHERE tpp.PARTITIONKEY = 1
          AND adim.BALANCE_COLUMN_NAME IS NOT NULL
          ORDER BY adim.BALANCE_COLUMN_NAME
    2013-11-08 11:49:14,197 DEBUG [AIF]: {'APPLICATION_ID': 2L, 'IMPORT_FROM_SOURCE_FLAG': u'N', 'PLAN_TYPE': None, 'RULE_NAME': u'VISIONRULE', 'ACTUAL_FLAG': u'A', 'IS_INCREMENTAL_LOAD': False, 'EPM_ORACLE_INSTANCE': u'C:\\Oracle\\Middleware\\user_projects\\epmsystem1', 'CATKEY': 1L, 'BAL_SEG_VALUE_OPTION_CODE': u'ALL', 'INCLUDE_ADJ_PERIODS_FLAG': u'N', 'PERIOD_MAPPING_TYPE': u'EXPLICIT', 'SOURCE_SYSTEM_TYPE': u'EBS_R12', 'LEDGER_GROUP': None, 'TARGET_APPLICATION_NAME': u'OASIS', 'RECALCULATE_FLAG': u'N', 'SOURCE_SYSTEM_ID': 2L, 'TEMP_DATA_TABLE_NAME': 'TDATASEG_T', 'KK_FLAG': u'N', 'AMOUNT_TYPE': u'MONETARY', 'EXPORT_TO_TARGET_FLAG': u'Y', 'DATA_TABLE_NAME': 'TDATASEG', 'DIMNAME_LIST': [u'ACCOUNT', u'ENTITY', u'ICP', u'UD1', u'UD2', u'UD3', u'UD4'], 'TDATAMAPTYPE': 'ERP', 'LAST_UPDATED_BY': u'admin', 'DIMNAME_MAP': {u'UD3': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT5', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD3', 'DIMENSION_ID': 9L}, u'ICP': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'ICP', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT7', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ICP', 'DIMENSION_ID': 8L}, u'ENTITY': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Entity', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT1', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ENTITY', 'DIMENSION_ID': 12L}, u'UD2': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT4', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD2', 'DIMENSION_ID': 11L}, u'UD4': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT6', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD4', 'DIMENSION_ID': 1L}, u'ACCOUNT': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Account', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT3', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ACCOUNT', 'DIMENSION_ID': 10L}, u'UD1': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT2', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD1', 'DIMENSION_ID': 7L}}, 'TARGET_APPLICATION_TYPE': u'HFM', 'PARTITIONKEY': 1L, 'PARTVALGROUP': u'[NONE]', 'LEDGER_GROUP_CODE': u'SINGLE', 'INCLUDE_ZERO_BALANCE_FLAG': u'N', 'EXECUTION_MODE': u'SNAPSHOT', 'PLAN_NUMBER': 0L, 'PS_LEDGER': None, 'BALANCE_SELECTION': u'FUNCTIONAL', 'BALANCE_AMOUNT_IS': u'PERIODIC', 'RULE_ID': 1L, 'BALANCE_AMOUNT_BS': u'YTD', 'CURRENCY_CODE': None, 'SOURCE_ADAPTER_FLAG': u'N', 'BALANCE_METHOD_CODE': u'STANDARD', 'SIGNAGE_METHOD': u'SAME', 'WEB_SERVICE_URL': u'http://localhost:9000/aif', 'DATA_LOAD_METHOD': u'CLASSIC_VIA_EPMI'}
    2013-11-08 11:49:14,197 DEBUG [AIF]: CommData.getRuleInfo - END
    2013-11-08 11:49:14,224 DEBUG [AIF]: CommData.insertPeriods - START
    2013-11-08 11:49:14,228 DEBUG [AIF]: CommData.getLedgerListAndMap - START
    2013-11-08 11:49:14,229 DEBUG [AIF]: CommData.getLedgerSQL - START
    2013-11-08 11:49:14,229 DEBUG [AIF]: CommData.getLedgerSQL - END
    2013-11-08 11:49:14,229 DEBUG [AIF]:
              SELECT l.SOURCE_LEDGER_ID
              ,l.SOURCE_LEDGER_NAME
              ,l.SOURCE_COA_ID
              ,l.CALENDAR_ID
              ,'0' SETID
              ,l.PERIOD_TYPE
              ,NULL LEDGER_TABLE_NAME
              FROM AIF_BALANCE_RULES br
              ,AIF_COA_LEDGERS l
              WHERE br.RULE_ID = 1
              AND l.SOURCE_SYSTEM_ID = br.SOURCE_SYSTEM_ID
              AND l.SOURCE_LEDGER_ID = br.SOURCE_LEDGER_ID
    2013-11-08 11:49:14,230 DEBUG [AIF]: CommData.getLedgerListAndMap - END
    2013-11-08 11:49:14,232 DEBUG [AIF]:
            INSERT INTO AIF_PROCESS_PERIODS (
              PROCESS_ID
              ,PERIODKEY
              ,PERIOD_ID
              ,ADJUSTMENT_PERIOD_FLAG
              ,GL_PERIOD_YEAR
              ,GL_PERIOD_NUM
              ,GL_PERIOD_NAME
              ,GL_PERIOD_CODE
              ,GL_EFFECTIVE_PERIOD_NUM
              ,YEARTARGET
              ,PERIODTARGET
              ,IMP_ENTITY_TYPE
              ,IMP_ENTITY_ID
              ,IMP_ENTITY_NAME
              ,TRANS_ENTITY_TYPE
              ,TRANS_ENTITY_ID
              ,TRANS_ENTITY_NAME
              ,PRIOR_PERIOD_FLAG
              ,SOURCE_LEDGER_ID
                    SELECT DISTINCT brl.LOADID PROCESS_ID
                    ,pp.PERIODKEY PERIODKEY
                    ,prd.PERIOD_ID
                    ,COALESCE(prd.ADJUSTMENT_PERIOD_FLAG, 'N') ADJUSTMENT_PERIOD_FLAG
                    ,COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) GL_PERIOD_YEAR
                    ,COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0) GL_PERIOD_NUM
                    ,COALESCE(prd.PERIOD_NAME, ppsrc.GL_PERIOD_NAME,'0') GL_PERIOD_NAME
                    ,COALESCE(prd.PERIOD_CODE, CAST(COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0) AS VARCHAR(38)),'0') GL_PERIOD_CODE
                    ,(COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) * 10000 + COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0)) GL_EFFECTIVE_PERIOD_NUM
                    ,COALESCE(ppa.YEARTARGET, pp.YEARTARGET) YEARTARGET
                    ,COALESCE(ppa.PERIODTARGET, pp.PERIODTARGET) PERIODTARGET
                    ,'PROCESS_BAL_IMP' IMP_ENTITY_TYPE
                    ,(COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) * 10000 + COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0)) IMP_ENTITY_ID
                    ,COALESCE(prd.PERIOD_NAME, ppsrc.GL_PERIOD_NAME,'0') IMP_ENTITY_NAME
                    ,'PROCESS_BAL_TRANS' TRANS_ENTITY_TYPE
                    ,(COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) * 10000 + COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0)) TRANS_ENTITY_ID
                    ,pp.PERIODDESC TRANS_ENTITY_NAME
                    ,'N' PRIOR_PERIOD_FLAG
                    ,2202 SOURCE_LEDGER_ID
                    FROM (
                      AIF_BAL_RULE_LOADS brl
                      INNER JOIN TPOVCATEGORY pc
                        ON pc.CATKEY = brl.CATKEY
                      INNER JOIN TPOVPERIOD_FLAT_V pp
                        ON pp.PERIODFREQ = pc.CATFREQ
                        AND pp.PERIODKEY >= brl.START_PERIODKEY
                        AND pp.PERIODKEY <= brl.END_PERIODKEY
                      LEFT OUTER JOIN TPOVPERIODADAPTOR_FLAT_V ppa
                        ON ppa.PERIODKEY = pp.PERIODKEY
                        AND ppa.PERIODFREQ = pp.PERIODFREQ
                        AND ppa.INTSYSTEMKEY = 'OASIS'
                    INNER JOIN TPOVPERIODSOURCE ppsrc
                      ON ppsrc.PERIODKEY = pp.PERIODKEY
                      AND ppsrc.MAPPING_TYPE = 'EXPLICIT'
                      AND ppsrc.SOURCE_SYSTEM_ID = 2
                      AND ppsrc.CALENDAR_ID IN ('29067')
                    LEFT OUTER JOIN AIF_GL_PERIODS_STG prd
                      ON prd.PERIOD_ID = ppsrc.PERIOD_ID
                      AND prd.SOURCE_SYSTEM_ID = ppsrc.SOURCE_SYSTEM_ID
                      AND prd.CALENDAR_ID = ppsrc.CALENDAR_ID
              AND prd.SETID = '0'
              AND prd.PERIOD_TYPE = '507'
                      AND prd.ADJUSTMENT_PERIOD_FLAG = 'N'
                    WHERE brl.LOADID = 22
                    ORDER BY pp.PERIODKEY
                    ,GL_EFFECTIVE_PERIOD_NUM
    2013-11-08 11:49:14,235 DEBUG [AIF]: CommData.insertPeriods - END
    2013-11-08 11:49:14,240 DEBUG [AIF]: CommData.moveData - START
    2013-11-08 11:49:14,242 DEBUG [AIF]: CommData.getPovList - START
    2013-11-08 11:49:14,242 DEBUG [AIF]:
            SELECT PARTITIONKEY
            ,PARTNAME
            ,CATKEY
            ,CATNAME
            ,PERIODKEY
            ,COALESCE(PERIODDESC, TO_CHAR(PERIODKEY,'YYYY-MM-DD HH24:MI:SS')) PERIODDESC
            ,RULE_ID
            ,RULE_NAME
            FROM (
              SELECT DISTINCT brl.PARTITIONKEY
              ,part.PARTNAME
              ,brl.CATKEY
              ,cat.CATNAME
              ,pprd.PERIODKEY
              ,pp.PERIODDESC
              ,brl.RULE_ID
              ,br.RULE_NAME
              FROM AIF_BAL_RULE_LOADS brl
              INNER JOIN AIF_BALANCE_RULES br
                ON br.RULE_ID = brl.RULE_ID
              INNER JOIN TPOVPARTITION part
                ON part.PARTITIONKEY = brl.PARTITIONKEY
              INNER JOIN TPOVCATEGORY cat
                ON cat.CATKEY = brl.CATKEY
              INNER JOIN AIF_PROCESS_PERIODS pprd
                ON pprd.PROCESS_ID = brl.LOADID
              LEFT OUTER JOIN TPOVPERIOD pp
                ON pp.PERIODKEY = pprd.PERIODKEY
              WHERE brl.LOADID = 22
            ) q
            ORDER BY PARTITIONKEY
            ,CATKEY
            ,PERIODKEY
            ,RULE_ID
    2013-11-08 11:49:14,244 DEBUG [AIF]: CommData.getPovList - END
    2013-11-08 11:49:14,245 INFO  [AIF]:
    Move Data for Period 'JAN'
    2013-11-08 11:49:14,246 DEBUG [AIF]:
              UPDATE TDATASEG
              SET LOADID = 22
              WHERE PARTITIONKEY = 1
              AND CATKEY = 1
              AND RULE_ID = 1
              AND LOADID < 22
                AND PERIODKEY = '2012-01-01'
    2013-11-08 11:49:14,320 DEBUG [AIF]: Number of Rows updated on TDATASEG: 1842
    2013-11-08 11:49:14,320 DEBUG [AIF]:
            INSERT INTO AIF_APPL_LOAD_AUDIT (
              LOADID
              ,TARGET_APPLICATION_TYPE
              ,TARGET_APPLICATION_NAME
              ,PLAN_TYPE
              ,SOURCE_LEDGER_ID
              ,EPM_YEAR
              ,EPM_PERIOD
              ,SNAPSHOT_FLAG
              ,SEGMENT_FILTER_FLAG
              ,PARTITIONKEY
              ,CATKEY
              ,RULE_ID
              ,PERIODKEY
              ,EXPORT_TO_TARGET_FLAG
            SELECT DISTINCT 22
            ,TARGET_APPLICATION_TYPE
            ,TARGET_APPLICATION_NAME
            ,PLAN_TYPE
            ,SOURCE_LEDGER_ID
            ,EPM_YEAR
            ,EPM_PERIOD
            ,SNAPSHOT_FLAG
            ,SEGMENT_FILTER_FLAG
            ,PARTITIONKEY
            ,CATKEY
            ,RULE_ID
            ,PERIODKEY
            ,'N'
            FROM AIF_APPL_LOAD_AUDIT
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND RULE_ID = 1
            AND LOADID < 22
                AND PERIODKEY = '2012-01-01'
    2013-11-08 11:49:14,321 DEBUG [AIF]: Number of Rows inserted into AIF_APPL_LOAD_AUDIT: 1
    2013-11-08 11:49:14,322 DEBUG [AIF]:
            INSERT INTO AIF_APPL_LOAD_PRD_AUDIT (
              LOADID
              ,GL_PERIOD_ID
              ,GL_PERIOD_YEAR
              ,DELTA_RUN_ID
              ,PARTITIONKEY
              ,CATKEY
              ,RULE_ID
              ,PERIODKEY
            SELECT DISTINCT 22
            ,GL_PERIOD_ID
            ,GL_PERIOD_YEAR
            ,DELTA_RUN_ID
            ,PARTITIONKEY
            ,CATKEY
            ,RULE_ID
            ,PERIODKEY
            FROM AIF_APPL_LOAD_PRD_AUDIT
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND RULE_ID = 1
            AND LOADID < 22
                AND PERIODKEY = '2012-01-01'
    2013-11-08 11:49:14,323 DEBUG [AIF]: Number of Rows inserted into AIF_APPL_LOAD_PRD_AUDIT: 1
    2013-11-08 11:49:14,325 DEBUG [AIF]: CommData.moveData - END
    2013-11-08 11:49:14,332 DEBUG [AIF]: CommData.updateWorkflow - START
    2013-11-08 11:49:14,332 DEBUG [AIF]:
        SELECT tlp.PROCESSSTATUS
        ,tlps.PROCESSSTATUSDESC
        ,CASE WHEN (tlp.INTLOCKSTATE = 60) THEN 'Y' ELSE 'N' END LOCK_FLAG
        FROM TLOGPROCESS tlp
        ,TLOGPROCESSSTATES tlps
        WHERE tlp.PARTITIONKEY = 1
        AND tlp.CATKEY = 1
        AND tlp.PERIODKEY = '2012-01-01'
        AND tlp.RULE_ID = 1
        AND tlps.PROCESSSTATUSKEY = tlp.PROCESSSTATUS
    2013-11-08 11:49:14,336 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 20
              ,PROCESSEXP = 0
              ,PROCESSENTLOAD = 0
              ,PROCESSENTVAL = 0
              ,PROCESSEXPNOTE = NULL
              ,PROCESSENTLOADNOTE = NULL
              ,PROCESSENTVALNOTE = NULL
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,338 DEBUG [AIF]: CommData.updateWorkflow - END
    2013-11-08 11:49:14,339 DEBUG [AIF]: CommData.purgeInvalidRecordsTDATASEG - START
    2013-11-08 11:49:14,341 DEBUG [AIF]:
            DELETE FROM TDATASEG
            WHERE LOADID = 22
              AND (
            PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
            AND VALID_FLAG = 'N'
    2013-11-08 11:49:14,342 DEBUG [AIF]: Number of Rows deleted from TDATASEG: 0
    2013-11-08 11:49:14,342 DEBUG [AIF]: CommData.purgeInvalidRecordsTDATASEG - END
    2013-11-08 11:49:14,344 DEBUG [AIF]: CommData.updateAppLoadAudit - START
    2013-11-08 11:49:14,344 DEBUG [AIF]:
            UPDATE AIF_APPL_LOAD_AUDIT
            SET EXPORT_TO_TARGET_FLAG = 'Y'
            WHERE LOADID = 22
            AND PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY= '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,345 DEBUG [AIF]: Number of Rows updated on AIF_APPL_LOAD_AUDIT: 1
    2013-11-08 11:49:14,345 DEBUG [AIF]: CommData.updateAppLoadAudit - END
    2013-11-08 11:49:14,345 DEBUG [AIF]: CommData.updateWorkflow - START
    2013-11-08 11:49:14,346 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 21
              ,PROCESSEXP = 1
              ,PROCESSEXPNOTE = 'Export Successful'
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,347 DEBUG [AIF]: CommData.updateWorkflow - END
    2013-11-08 11:49:14,347 DEBUG [AIF]: CommData.exportData - END
    2013-11-08 11:49:14,404 DEBUG [AIF]: HfmData.loadData - START
    2013-11-08 11:49:14,404 DEBUG [AIF]: CommData.getRuleInfo - START
    2013-11-08 11:49:14,404 DEBUG [AIF]:
            SELECT brl.RULE_ID
            ,br.RULE_NAME
            ,brl.PARTITIONKEY
            ,brl.CATKEY
            ,part.PARTVALGROUP
            ,br.SOURCE_SYSTEM_ID
            ,ss.SOURCE_SYSTEM_TYPE
            ,CASE
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'EBS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'PS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FUSION%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FILE%' THEN 'N'
              ELSE 'Y'
            END SOURCE_ADAPTER_FLAG
            ,app.APPLICATION_ID
            ,app.TARGET_APPLICATION_NAME
            ,app.TARGET_APPLICATION_TYPE
            ,app.DATA_LOAD_METHOD
            ,brl.PLAN_TYPE
            ,CASE brl.PLAN_TYPE
              WHEN 'PLAN1' THEN 1
              WHEN 'PLAN2' THEN 2
              WHEN 'PLAN3' THEN 3
              WHEN 'PLAN4' THEN 4
              WHEN 'PLAN5' THEN 5
              ELSE 0
            END PLAN_NUMBER
            ,br.INCL_ZERO_BALANCE_FLAG
            ,br.PERIOD_MAPPING_TYPE
            ,br.INCLUDE_ADJ_PERIODS_FLAG
            ,br.BALANCE_TYPE ACTUAL_FLAG
            ,br.AMOUNT_TYPE
            ,br.BALANCE_SELECTION
            ,br.BALANCE_METHOD_CODE
            ,COALESCE(br.SIGNAGE_METHOD, 'ABSOLUTE') SIGNAGE_METHOD
            ,br.CURRENCY_CODE
            ,br.BAL_SEG_VALUE_OPTION_CODE
            ,brl.EXECUTION_MODE
            ,COALESCE(brl.IMPORT_FROM_SOURCE_FLAG, 'Y') IMPORT_FROM_SOURCE_FLAG
            ,COALESCE(brl.RECALCULATE_FLAG, 'N') RECALCULATE_FLAG
            ,COALESCE(brl.EXPORT_TO_TARGET_FLAG, 'N') EXPORT_TO_TARGET_FLAG
            ,CASE
              WHEN (br.LEDGER_GROUP_ID IS NOT NULL) THEN 'MULTI'
              WHEN (br.SOURCE_LEDGER_ID IS NOT NULL) THEN 'SINGLE'
              ELSE 'NONE'
            END LEDGER_GROUP_CODE
            ,COALESCE(br.BALANCE_AMOUNT_BS, 'YTD') BALANCE_AMOUNT_BS
            ,COALESCE(br.BALANCE_AMOUNT_IS, 'PERIODIC') BALANCE_AMOUNT_IS
            ,br.LEDGER_GROUP
            ,(SELECT brd.DETAIL_CODE
              FROM AIF_BAL_RULE_DETAILS brd
              WHERE brd.RULE_ID = br.RULE_ID
              AND brd.DETAIL_TYPE = 'LEDGER'       
            ) PS_LEDGER
            ,CASE lg.LEDGER_TEMPLATE
              WHEN 'COMMITMENT' THEN 'Y'
              ELSE 'N'
            END KK_FLAG
            ,p.LAST_UPDATED_BY
            ,p.AIF_WEB_SERVICE_URL WEB_SERVICE_URL
            ,p.EPM_ORACLE_INSTANCE
            FROM AIF_PROCESSES p
            INNER JOIN AIF_BAL_RULE_LOADS brl
              ON brl.LOADID = p.PROCESS_ID
            INNER JOIN AIF_BALANCE_RULES br
              ON br.RULE_ID = brl.RULE_ID
            INNER JOIN AIF_SOURCE_SYSTEMS ss
              ON ss.SOURCE_SYSTEM_ID = br.SOURCE_SYSTEM_ID
            INNER JOIN AIF_TARGET_APPLICATIONS app
              ON app.APPLICATION_ID = brl.APPLICATION_ID
            INNER JOIN TPOVPARTITION part
              ON part.PARTITIONKEY = br.PARTITIONKEY
            INNER JOIN TBHVIMPGROUP imp
              ON imp.IMPGROUPKEY = part.PARTIMPGROUP
            LEFT OUTER JOIN AIF_COA_LEDGERS l
              ON l.SOURCE_SYSTEM_ID = p.SOURCE_SYSTEM_ID
              AND l.SOURCE_LEDGER_ID = COALESCE(br.SOURCE_LEDGER_ID,imp.IMPSOURCELEDGERID)
            LEFT OUTER JOIN AIF_PS_SET_CNTRL_REC_STG scr
              ON scr.SOURCE_SYSTEM_ID = l.SOURCE_SYSTEM_ID
              AND scr.SETCNTRLVALUE = l.SOURCE_LEDGER_NAME
              AND scr.RECNAME = 'LED_GRP_TBL'
            LEFT OUTER JOIN AIF_PS_LED_GRP_TBL_STG lg
              ON lg.SOURCE_SYSTEM_ID = scr.SOURCE_SYSTEM_ID
              AND lg.SETID = scr.SETID
              AND lg.LEDGER_GROUP = br.LEDGER_GROUP
            WHERE p.PROCESS_ID = 22
    2013-11-08 11:49:14,406 DEBUG [AIF]:
          SELECT adim.BALANCE_COLUMN_NAME DIMNAME
          ,adim.DIMENSION_ID
          ,dim.TARGET_DIMENSION_CLASS_NAME
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID1
          ) COA_SEGMENT_NAME1
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID2
          ) COA_SEGMENT_NAME2
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID3
          ) COA_SEGMENT_NAME3
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID4
          ) COA_SEGMENT_NAME4
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID5
          ) COA_SEGMENT_NAME5
          ,(SELECT CASE mdd.ORPHAN_OPTION_CODE
              WHEN 'CHILD' THEN 'N'
              WHEN 'ROOT' THEN 'N'
              ELSE 'Y'
            END DIMENSION_FILTER_FLAG
            FROM AIF_MAP_DIM_DETAILS_V mdd
            ,AIF_MAPPING_RULES mr
            WHERE mr.PARTITIONKEY = tpp.PARTITIONKEY
            AND mdd.RULE_ID = mr.RULE_ID
            AND mdd.DIMENSION_ID = adim.DIMENSION_ID
          ) DIMENSION_FILTER_FLAG
          ,tiie.IMPCONCATCHAR
          FROM TPOVPARTITION tpp
          INNER JOIN AIF_TARGET_APPL_DIMENSIONS adim
            ON adim.APPLICATION_ID = 2
          INNER JOIN AIF_DIMENSIONS dim
            ON dim.DIMENSION_ID = adim.DIMENSION_ID
          LEFT OUTER JOIN TBHVIMPITEMERPI tiie
            ON tiie.IMPGROUPKEY = tpp.PARTIMPGROUP
            AND tiie.IMPFLDFIELDNAME = adim.BALANCE_COLUMN_NAME
            AND tiie.IMPMAPTYPE = 'ERP'
          WHERE tpp.PARTITIONKEY = 1
          AND adim.BALANCE_COLUMN_NAME IS NOT NULL
          ORDER BY adim.BALANCE_COLUMN_NAME
    2013-11-08 11:49:14,407 DEBUG [AIF]: {'APPLICATION_ID': 2L, 'IMPORT_FROM_SOURCE_FLAG': u'N', 'PLAN_TYPE': None, 'RULE_NAME': u'VISIONRULE', 'ACTUAL_FLAG': u'A', 'IS_INCREMENTAL_LOAD': False, 'EPM_ORACLE_INSTANCE': u'C:\\Oracle\\Middleware\\user_projects\\epmsystem1', 'CATKEY': 1L, 'BAL_SEG_VALUE_OPTION_CODE': u'ALL', 'INCLUDE_ADJ_PERIODS_FLAG': u'N', 'PERIOD_MAPPING_TYPE': u'EXPLICIT', 'SOURCE_SYSTEM_TYPE': u'EBS_R12', 'LEDGER_GROUP': None, 'TARGET_APPLICATION_NAME': u'OASIS', 'RECALCULATE_FLAG': u'N', 'SOURCE_SYSTEM_ID': 2L, 'TEMP_DATA_TABLE_NAME': 'TDATASEG_T', 'KK_FLAG': u'N', 'AMOUNT_TYPE': u'MONETARY', 'EXPORT_TO_TARGET_FLAG': u'Y', 'DATA_TABLE_NAME': 'TDATASEG', 'DIMNAME_LIST': [u'ACCOUNT', u'ENTITY', u'ICP', u'UD1', u'UD2', u'UD3', u'UD4'], 'TDATAMAPTYPE': 'ERP', 'LAST_UPDATED_BY': u'admin', 'DIMNAME_MAP': {u'UD3': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT5', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD3', 'DIMENSION_ID': 9L}, u'ICP': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'ICP', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT7', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ICP', 'DIMENSION_ID': 8L}, u'ENTITY': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Entity', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT1', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ENTITY', 'DIMENSION_ID': 12L}, u'UD2': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT4', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD2', 'DIMENSION_ID': 11L}, u'UD4': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT6', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD4', 'DIMENSION_ID': 1L}, u'ACCOUNT': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Account', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT3', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ACCOUNT', 'DIMENSION_ID': 10L}, u'UD1': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT2', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD1', 'DIMENSION_ID': 7L}}, 'TARGET_APPLICATION_TYPE': u'HFM', 'PARTITIONKEY': 1L, 'PARTVALGROUP': u'[NONE]', 'LEDGER_GROUP_CODE': u'SINGLE', 'INCLUDE_ZERO_BALANCE_FLAG': u'N', 'EXECUTION_MODE': u'SNAPSHOT', 'PLAN_NUMBER': 0L, 'PS_LEDGER': None, 'BALANCE_SELECTION': u'FUNCTIONAL', 'BALANCE_AMOUNT_IS': u'PERIODIC', 'RULE_ID': 1L, 'BALANCE_AMOUNT_BS': u'YTD', 'CURRENCY_CODE': None, 'SOURCE_ADAPTER_FLAG': u'N', 'BALANCE_METHOD_CODE': u'STANDARD', 'SIGNAGE_METHOD': u'SAME', 'WEB_SERVICE_URL': u'http://localhost:9000/aif', 'DATA_LOAD_METHOD': u'CLASSIC_VIA_EPMI'}
    2013-11-08 11:49:14,407 DEBUG [AIF]: CommData.getRuleInfo - END
    2013-11-08 11:49:14,407 DEBUG [AIF]: CommData.getPovList - START
    2013-11-08 11:49:14,407 DEBUG [AIF]:
            SELECT PARTITIONKEY
            ,PARTNAME
            ,CATKEY
            ,CATNAME
            ,PERIODKEY
            ,COALESCE(PERIODDESC, TO_CHAR(PERIODKEY,'YYYY-MM-DD HH24:MI:SS')) PERIODDESC
            ,RULE_ID
            ,RULE_NAME
            FROM (
              SELECT DISTINCT brl.PARTITIONKEY
              ,part.PARTNAME
              ,brl.CATKEY
              ,cat.CATNAME
              ,pprd.PERIODKEY
              ,pp.PERIODDESC
              ,brl.RULE_ID
              ,br.RULE_NAME
              FROM AIF_BAL_RULE_LOADS brl
              INNER JOIN AIF_BALANCE_RULES br
                ON br.RULE_ID = brl.RULE_ID
              INNER JOIN TPOVPARTITION part
                ON part.PARTITIONKEY = brl.PARTITIONKEY
              INNER JOIN TPOVCATEGORY cat
                ON cat.CATKEY = brl.CATKEY
              INNER JOIN AIF_PROCESS_PERIODS pprd
                ON pprd.PROCESS_ID = brl.LOADID
              LEFT OUTER JOIN TPOVPERIOD pp
                ON pp.PERIODKEY = pprd.PERIODKEY
              WHERE brl.LOADID = 22
            ) q
            ORDER BY PARTITIONKEY
            ,CATKEY
            ,PERIODKEY
            ,RULE_ID
    2013-11-08 11:49:14,409 DEBUG [AIF]: CommData.getPovList - END
    2013-11-08 11:49:14,409 DEBUG [AIF]: CommData.updateWorkflow - START
    2013-11-08 11:49:14,409 DEBUG [AIF]:
        SELECT tlp.PROCESSSTATUS
        ,tlps.PROCESSSTATUSDESC
        ,CASE WHEN (tlp.INTLOCKSTATE = 60) THEN 'Y' ELSE 'N' END LOCK_FLAG
        FROM TLOGPROCESS tlp
        ,TLOGPROCESSSTATES tlps
        WHERE tlp.PARTITIONKEY = 1
        AND tlp.CATKEY = 1
        AND tlp.PERIODKEY = '2012-01-01'
        AND tlp.RULE_ID = 1
        AND tlps.PROCESSSTATUSKEY = tlp.PROCESSSTATUS
    2013-11-08 11:49:14,410 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 30
              ,PROCESSENTLOAD = 0
              ,PROCESSENTVAL = 0
              ,PROCESSENTLOADNOTE = NULL
              ,PROCESSENTVALNOTE = NULL
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,411 DEBUG [AIF]: CommData.updateWorkflow - END
    2013-11-08 11:49:14,412 DEBUG [AIF]:
        SELECT COALESCE(usr.PROFILE_OPTION_VALUE, app.PROFILE_OPTION_VALUE, site.PROFILE_OPTION_VALUE) PROFILE_OPTION_VALUE
        FROM AIF_PROFILE_OPTIONS po
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES site
          ON site.PROFILE_OPTION_NAME = po.PROFILE_OPTION_NAME
          AND site.LEVEL_ID = 1000
          AND site.LEVEL_VALUE = 0
          AND site.LEVEL_ID <= po.MAX_LEVEL_ID
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES app
          ON app.PROFILE_OPTION_NAME = site.PROFILE_OPTION_NAME
          AND app.LEVEL_ID = 1005
          AND app.LEVEL_VALUE = NULL
          AND app.LEVEL_ID <= po.MAX_LEVEL_ID
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES usr
          ON usr.PROFILE_OPTION_NAME = usr.PROFILE_OPTION_NAME
          AND usr.LEVEL_ID = 1010
          AND usr.LEVEL_VALUE = NULL
          AND usr.LEVEL_ID <= po.MAX_LEVEL_ID
        WHERE po.PROFILE_OPTION_NAME = 'JAVA_HOME'
    2013-11-08 11:49:14,413 DEBUG [AIF]: HFM Load command:
    %EPM_ORACLE_HOME%/products/FinancialDataQuality/bin/HFM_LOAD.vbs "22" "a9E3uvSJNhkFhEQTmuUFFUElfdQgKJKHrb1EsjRsL6yZJlXsOFcVPbGWHhpOQzl9zvHoo3s%2Bdq6R4yJhp0GMNWIKMTcizp3%2F8HASkA7rVufXDWEpAKAK%2BvcFmj4zLTI3rtrKHlVEYrOLMY453J2lXk6Cy771mNSD8X114CqaWSdUKGbKTRGNpgE3BfRGlEd1wZ3cra4ee0jUbT2aTaiqSN26oVe6dyxM3zolc%2BOPkjiDNk1MqwNr43tT3JsZz4qEQGF9d39DRN3CDjUuZRPt4SEKSSL35upncRJiw2uBOtV%2FvSuGLNpZ2J79v1%2Ba1Oo9c4Xhig7SFYbE6Jwk1yXRJLTSw0DKFu%2FEpcdjpOnx%2F6YawMBNIa5iu5L637S91jT1Xd3EGmxZFq%2Bi6bHdCJAC8g%3D%3D" "C%3A%5COracle%5CMiddleware%5Cuser_projects%5Cepmsystem1" "%25EPM_ORACLE_HOME%25%2F..%2Fjdk160_35"
    Is there anywhere we need to set the EPM home? Also, please let me know what PSU1 is.
    Thanks in advance.
    Praneeth

  • How to store data file name in one of the columns of staging table

    My requirement is to load data from a .dat file into an Oracle staging table. I have done the following steps:
    1. Created a control file and stored it in the bin directory.
    2. Created a data file and stored it in the bin directory.
    3. Registered a concurrent program with execution method SQL*Loader.
    4. Added the concurrent program to a request group.
    I am passing the file name as a parameter to the concurrent program. When I run the program, the data gets loaded into the staging table correctly.
    Now I want to store the filename (which is passed as a parameter) in one of the columns of the staging table. I tried different ways found through Google, but none of them worked. I am using the control file below:
    OPTIONS (SKIP = 1)
    LOAD DATA
    INFILE '&1'
    APPEND INTO TABLE XXCISCO_SO_INTF_STG_TB
    FIELDS TERMINATED BY ","
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (COUNTRY_NAME
    ,COUNTRY_CODE
    ,ORDER_CATEGORY
    ,ORDER_NUMBER
    ,RECORD_ID "XXCISCO_SO_INTF_STG_TB_S.NEXTVAL"
    ,FILE_NAME CONSTANT "&1"
    ,REQUEST_ID "fnd_global.conc_request_id"
    ,LAST_UPDATED_BY "FND_GLOBAL.USER_ID"
    ,LAST_UPDATE_DATE SYSDATE
    ,CREATED_BY "FND_GLOBAL.USER_ID"
    ,CREATION_DATE SYSDATE
    ,INTERFACE_STATUS CONSTANT "N"
    ,RECORD_STATUS CONSTANT "N"
    )
    I want to store the file name in the FILE_NAME column stated above. I tried with and without CONSTANT using "$1", "&1", ":$1", ":&1", &1, $1... but none of them worked. Please suggest a solution.
    Thanks,
    Abhay

    Pl post details of OS, database and EBS versions. There is no easy way to achieve this.
    Pl see previous threads on this topic
    SQL*Loader to insert data file name during load
    Sql Loader with new column
    HTH
    Srini
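    One workaround sometimes used (a sketch, assuming the concurrent program's wrapper can run a SQL step after the load): since the control file already stamps REQUEST_ID on every row, backfill the file name for exactly the rows just loaded:

    UPDATE xxcisco_so_intf_stg_tb
       SET file_name = '&1'   -- the same file-name parameter passed to the program
     WHERE request_id = fnd_global.conc_request_id;
    COMMIT;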

  • Synchronizing Updates on a Staging Table

    Please help me with resolving the following issue:
    A load script runs to move records from a data file into a staging table.
    After this script completes, there is code to update two fields of the staging table.
    To do this, the shell script runs a script (generate_ranges.sql). It takes a parameter of 5000000 and creates ranges based on this passed-in number, up to the total number of rows in the staging table. So say the staging table has 65,000,000 rows.
    This script will create a file that looks like the following (when 5000000 is passed in):
    1 | 5000000
    5000001 | 10000000
    10000001 | 15000000
    15000001 | 20000000
    20000001 | 25000000
    25000001 | 30000000
    30000001 | 35000000
    35000001 | 40000000
    40000001 | 45000000
    45000001 | 50000000
    50000001 | 55000000
    55000001 | 60000000
    60000001 | 65000000
    The script goes on to read the ranges file row by row, calling a shell script and passing in each range. So in this case there are 13 ranges, which means 13 separate updates on the staging table happening in the background.
    The first updates rows 1 - 5000000, the second rows 5000001 - 10000000, and so on.
    So there are 13 updates happening behind the scenes.
    The problem is that there is no way for the script to know that all updates completed successfully before proceeding automatically. Right now I manually check that all updates completed and then restart the script at the next step. However, we want the code to verify automatically that all the updates are done and then move on in the script. So we need a way to count the number of candidate updates (right now 13, but it could be 14 or more in the future) and know that all "x" updates completed. Update (1 - 5000000) may take 30 minutes while the next update (5000001 - 10000000) takes 35 minutes; all updates run in parallel, and only after the 13 parallel updates are complete can the script proceed with the subsequent steps.
    So please help me out with fixing this problem programmatically.
    Thanks for your cooperation in advance.
    Regards,
    Ayan.

    Ayan,
    Are you really sure you want to update 65 million rows?
    Alternative: create table as select <columns with 2 columns updated> from staging table;
    With this approach, you probably don't need to split the update.
    Regards,
    Rob.
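    Another angle worth a look: on 11g, DBMS_PARALLEL_EXECUTE can do the chunking for you, and RUN_TASK does not return until every chunk has been processed, which removes the need to poll the 13 background updates. A minimal sketch (the table name, columns, and SET clause are assumptions):

    DECLARE
      l_task VARCHAR2(30) := 'stg_two_col_update';
    BEGIN
      DBMS_PARALLEL_EXECUTE.CREATE_TASK(l_task);
      -- split the table into ~5,000,000-row chunks, mirroring generate_ranges.sql
      DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
        task_name   => l_task,
        table_owner => USER,
        table_name  => 'STAGING_TABLE',   -- hypothetical table name
        by_row      => TRUE,
        chunk_size  => 5000000);
      -- RUN_TASK blocks until every chunk has finished
      DBMS_PARALLEL_EXECUTE.RUN_TASK(
        task_name      => l_task,
        sql_stmt       => 'UPDATE staging_table
                              SET field1 = ''X'', field2 = SYSDATE
                            WHERE ROWID BETWEEN :start_id AND :end_id',
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 13);
      IF DBMS_PARALLEL_EXECUTE.TASK_STATUS(l_task) = DBMS_PARALLEL_EXECUTE.FINISHED THEN
        DBMS_PARALLEL_EXECUTE.DROP_TASK(l_task);
      END IF;
    END;
    /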

  • How to load data from a staging table into the interface tables

    Hi,
    I have a staging table with these columns:
    invoice_number, invoice_date, vendor_name, vendor_site_code, description, line_amount, line_description, segment1, segment2, segment3, segment4, segment5
    I want to insert the data into the Oracle interface tables.
    The 1st table is AP_INVOICES_INTERFACE, which is the primary one,
    and the 2nd is AP_INVOICE_LINES_INTERFACE.
    For each invoice I have to insert the sum of the line amounts into the amount column of the primary table.
    Can anyone please share sample code?
    Any help is appreciated.

    Hi,
    You need to write a PL/SQL procedure or package to validate the data and insert it.
    First you need to know which columns are mandatory. Here is a simple example of the pattern:
    CREATE OR REPLACE PROCEDURE xxstg_po_vendors_int (errbuf OUT VARCHAR2, retcode OUT NUMBER) IS
      CURSOR po_cur IS
        SELECT sno, vendor_name, summary_flag, enabled_flag
          FROM xxstg_po_vendor;
      l_summary_flag VARCHAR2(1);
      l_enabled_flag VARCHAR2(1);
      l_err_msg      VARCHAR2(240);
      l_flag         VARCHAR2(2);
    BEGIN
      DELETE FROM ap_suppliers_int;
      COMMIT;
      FOR rec_cur IN po_cur LOOP
        l_flag := 'A';
        -- validate summary_flag against existing vendors
        BEGIN
          SELECT summary_flag
            INTO l_summary_flag
            FROM po_vendors
           WHERE summary_flag = rec_cur.summary_flag
             AND ROWNUM = 1;
        EXCEPTION
          WHEN NO_DATA_FOUND THEN
            l_flag    := 'E';
            l_err_msg := 'Summary_flag does not exist';
        END;
        -- validate enabled_flag the same way
        BEGIN
          SELECT enabled_flag
            INTO l_enabled_flag
            FROM po_vendors
           WHERE enabled_flag = rec_cur.enabled_flag
             AND ROWNUM = 1;
        EXCEPTION
          WHEN NO_DATA_FOUND THEN
            l_flag    := 'E';
            l_err_msg := 'Enabled_flag does not exist';
        END;
        -- insert only records that passed validation
        IF l_flag = 'E' THEN
          fnd_file.put_line(fnd_file.log, 'Record ' || rec_cur.sno || ' rejected: ' || l_err_msg);
        ELSE
          fnd_file.put_line(fnd_file.log, 'Inserting record ' || rec_cur.sno || ' into the interface table');
          INSERT INTO ap_suppliers_int
            (vendor_interface_id, vendor_name, summary_flag, enabled_flag)
          VALUES
            (rec_cur.sno, rec_cur.vendor_name, rec_cur.summary_flag, rec_cur.enabled_flag);
        END IF;
      END LOOP;
      COMMIT;
    END;
    /
    Regards
    Goutham
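    Mapped to the AP interface tables the question actually asks about, a hedged sketch might look like the following (sequence names and column lists are assumptions; verify them against your release, and note that segment1..5 would typically feed the distribution columns, omitted here):

    DECLARE
      l_invoice_id NUMBER;
    BEGIN
      FOR h IN (SELECT invoice_number,
                       MIN(invoice_date)     invoice_date,
                       MIN(vendor_name)      vendor_name,
                       MIN(vendor_site_code) vendor_site_code,
                       MIN(description)      description,
                       SUM(line_amount)      invoice_amount   -- header amount = sum of lines
                  FROM xx_ap_inv_stg                          -- hypothetical staging table
                 GROUP BY invoice_number) LOOP
        SELECT ap_invoices_interface_s.NEXTVAL INTO l_invoice_id FROM dual;
        INSERT INTO ap_invoices_interface
          (invoice_id, invoice_num, invoice_date, vendor_name,
           vendor_site_code, description, invoice_amount, source)
        VALUES
          (l_invoice_id, h.invoice_number, h.invoice_date, h.vendor_name,
           h.vendor_site_code, h.description, h.invoice_amount, 'XX_STAGING');
        INSERT INTO ap_invoice_lines_interface
          (invoice_id, invoice_line_id, line_number,
           line_type_lookup_code, amount, description)
        SELECT l_invoice_id, ap_invoice_lines_interface_s.NEXTVAL, ROWNUM,
               'ITEM', s.line_amount, s.line_description
          FROM xx_ap_inv_stg s
         WHERE s.invoice_number = h.invoice_number;
      END LOOP;
      COMMIT;
    END;
    /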

  • How to move data from a staging table to three entity tables #2

    Environment: SQL Server 2008 R2
    I have a few questions:
    How would I prevent duplicate records if the SSIS package is executed many times?
    How would I know that the entire volume of data has been loaded into the entity tables?
    In reference to "how to move data from a staging table to three entity tables", since I am loading a large volume of data while using a lookup transformation:
    which of the merge components is best suited?
    How do I configure the merge component correctly? (A screenshot is preferred.)
    Please refer to the following link:
    http://social.msdn.microsoft.com/Forums/en-US/5f2128c8-3ddd-4455-9076-05fa1902a62a/how-to-move-data-from-a-staging-table-to-three-entity-tables?forum=sqlintegrationservices

    You can use the RowCount transformation in the path where you want to capture record details, passing an integer variable to it to get the count value. The event handler can then be configured so that an Execute SQL Task runs an INSERT statement to add the row count to your audit table.
    Can you also show me how to check against the destination table using key columns inside a Lookup task and insert only non-matched records (the No Match output)?
    This is explained clearly in the link below, which Arthur posted:
    http://www.sqlis.com/sqlis/post/Get-all-from-Table-A-that-isnt-in-Table-B.aspx
    For large data volumes I would prefer doing this in T-SQL. So what you could do is dump the data into a staging table and then apply a T-SQL MERGE between the tables (or even a combination of INSERT/UPDATE statements), as sketched below.
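    A minimal shape for that MERGE (table, key, and column names are assumptions):

    MERGE INTO dbo.EntityTable AS t
    USING dbo.StagingTable AS s
      ON t.BusinessKey = s.BusinessKey
    WHEN MATCHED THEN
      UPDATE SET t.Col1 = s.Col1, t.Col2 = s.Col2
    WHEN NOT MATCHED BY TARGET THEN
      INSERT (BusinessKey, Col1, Col2)
      VALUES (s.BusinessKey, s.Col1, s.Col2);

    Re-running the package then updates rather than duplicates existing rows, which also addresses the duplicate-prevention question above.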
    Visakh

  • Injecting data into a star schema from a flat staging table

    I'm trying to work out the best approach for getting data from a very flat staging table and loading it into a star schema - I take a row from a table with, for example, 50 different attributes about a person, and then load these into a host of different tables, including linking tables.
    One of the attributes in the staging table will be an instruction to either insert the person and their new data, update a person and some component of their data, or maybe even terminate a person's records.
    I plan to use PL/SQL but I'm not sure of the best approach.
    The staging table will be loaded every 10 minutes and will contain about 300 updates.
    I'm not sure if I should just select the staging records into a cursor and then insert into the various tables.
    Has anyone got any working examples based on a similar experience?
    I can provide a working example if required.

    The database has some elements that make the SQL a tad harder to use.
    For example:
    CREATE TABLE staging
    (person_id NUMBER(10) NOT NULL ,
    title VARCHAR2(15) NULL ,
    initials VARCHAR2(5) NULL ,
    forename VARCHAR2(30) NULL ,
    middle_name VARCHAR2(30) NULL ,
    surname VARCHAR2(50) NULL,
    dial_number VARCHAR2(30) NULL,
    Is_Contactable CHAR(1) NULL);
    INSERT INTO staging
    (person_id, title, initials, forename, middle_name, surname, dial_number, is_contactable)
    VALUES (12345, 'Mr', NULL, 'Joe', NULL, 'Bloggs', '0117512345', 'Y');
    CREATE TABLE person
    (person_id NUMBER(10) NOT NULL ,
    title VARCHAR2(15) NULL ,
    initials VARCHAR2(5) NULL ,
    forename VARCHAR2(30) NULL ,
    middle_name VARCHAR2(30) NULL ,
    surname VARCHAR2(50) NULL);
    CREATE UNIQUE INDEX XPKPerson ON Person
    (Person_ID ASC);
    ALTER TABLE Person
    ADD CONSTRAINT XPKPerson PRIMARY KEY (Person_ID);
    CREATE TABLE person_comm
    (person_id NUMBER(10) NOT NULL ,
    comm_type_id NUMBER(10) NOT NULL ,
    comm_id NUMBER(10) NOT NULL );
    CREATE UNIQUE INDEX XPKPerson_Comm ON Person_Comm
    (Person_ID ASC,Comm_Type_ID ASC,Comm_ID ASC);
    ALTER TABLE Person_Comm
    ADD CONSTRAINT XPKPerson_Comm PRIMARY KEY (Person_ID,Comm_Type_ID,Comm_ID);
    CREATE TABLE person_comm_preference
    (person_id NUMBER(10) NOT NULL ,
    comm_type_id NUMBER(10) NOT NULL ,
    Is_Contactable CHAR(1) NULL);
    CREATE UNIQUE INDEX XPKPerson_Comm_Preference ON Person_Comm_Preference
    (Person_ID ASC,Comm_Type_ID ASC);
    ALTER TABLE Person_Comm_Preference
    ADD CONSTRAINT XPKPerson_Comm_Preference PRIMARY KEY (Person_ID,Comm_Type_ID);
    CREATE TABLE comm_type
    (comm_type_id NUMBER(10) NOT NULL ,
    NAME VARCHAR2(25) NULL ,
    description VARCHAR2(100) NULL ,
    comm_table_name VARCHAR2(50) NULL);
    CREATE UNIQUE INDEX XPKComm_Type ON Comm_Type
    (Comm_Type_ID ASC);
    ALTER TABLE Comm_Type
    ADD CONSTRAINT XPKComm_Type PRIMARY KEY (Comm_Type_ID);
    insert into comm_type (comm_type_id, NAME, description, comm_table_name) values (23456, 'HOME PHONE', 'Home Phone Number', 'PHONE');
    CREATE TABLE phone
    (phone_id NUMBER(10) NOT NULL ,
    dial_number VARCHAR2(30) NULL);
    Take the record from staging, then update:
    'person'
    'person_comm_preference', based on a comm_type of 'HOME PHONE'
    'person_comm', derived from 'person' and 'person_comm_preference'
    Then update 'phone' with the number, based on a link derived from 'person_comm', where 'comm_id' (part of that composite primary key) relates to the phone table's primary key, phone_id.
    Does your head hurt as much as mine?
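    For the insert-or-update part at least, a set-based MERGE per target table keeps this manageable; a sketch for the person table, using the DDL above:

    MERGE INTO person p
    USING staging s
       ON (p.person_id = s.person_id)
    WHEN MATCHED THEN
      UPDATE SET p.title       = s.title,
                 p.initials    = s.initials,
                 p.forename    = s.forename,
                 p.middle_name = s.middle_name,
                 p.surname     = s.surname
    WHEN NOT MATCHED THEN
      INSERT (person_id, title, initials, forename, middle_name, surname)
      VALUES (s.person_id, s.title, s.initials, s.forename, s.middle_name, s.surname);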
