Staging table in separate schema or separate database

Hello,
We are using SSIS to import data from 2 external sources into an OLTP database. The data from the sources needs to be inserted into different tables that have foreign key relationships between them, so the insertion order matters.
We have therefore decided to insert the data into staging tables using fast load, execute a stored procedure to insert the data into our own tables, and finally truncate the staging tables.
In this scenario, is it better to use a separate schema for our staging data or a separate database? For the staging data we do not require any backups or logging, and the data will be truncated as soon as the stored procedures finish execution.

Hello,
I think that instead of using staging tables you can load the data directly into the OLTP tables. Since you have PK-FK relationships in OLTP, you of course need to load the parent tables first. In SSIS, you can execute the parent-table data flow tasks (DFTs) first, followed by the child-table DFTs. If you think there may be scenarios where the source can send "orphan" records, use the error-row redirection mechanism to shunt the bad records off for manual inspection. You can also build alerts into your package for when rows get redirected, so that the team is alerted to fix them and maintain data consistency.
Thanks, hsbal
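
If you keep the staging-table approach the question describes, the stored procedure usually just loads parent rows before child rows and then clears staging. A minimal T-SQL sketch of that pattern, using a staging schema (one of the two options being weighed) and hypothetical Customer/Order tables:

CREATE PROCEDURE dbo.LoadFromStaging
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- parent rows first so the child table's foreign keys are satisfied
    INSERT INTO dbo.Customer (CustomerId, CustomerName)
    SELECT s.CustomerId, s.CustomerName
    FROM staging.Customer AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Customer AS c
                      WHERE c.CustomerId = s.CustomerId);

    INSERT INTO dbo.[Order] (OrderId, CustomerId, OrderDate)
    SELECT OrderId, CustomerId, OrderDate
    FROM staging.[Order];

    -- staging data is disposable once the load succeeds
    TRUNCATE TABLE staging.Customer;
    TRUNCATE TABLE staging.[Order];

    COMMIT TRANSACTION;
END;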

Similar Messages

  • Join multiple tables across separate databases

    I needed to perform a join on multiple tables in two separate databases. I am using Sybase ASE 12.5 and jConnect 5.5 JDBC driver.
    Table A is in DB1 and Table B is in DB2. Both DB1 and DB2 reside on the same database server.
    I have set up JNDI bound datasources DS1 for DB1 and DS2 for DB2.
    If the queries involve single tables or multiple tables within the same database, it is simple. But I am not seeing any way to perform a join between Tables A and B, which reside in separate databases. The datasources DS1 and DS2 are associated with DB1 and DB2 respectively, so I am not sure if there is anything like performing joins using two data sources.
    One alternative I am facing is using a stored procedure in one database that refers to the table in the other database for the join. But I may not be allowed to modify the database; currently, I am only allowed to perform READ-ONLY queries.
    So I am looking for any and all options. Please advise.
    Thanks!

    Two choices.
    One: find the syntax in DB2 that allows you to do what you want there. That would often be something like...
    select a.myfield
    from database1.table1 a, database2.table2 b
    where a.id = b.id
    Obviously the database itself must support this. If it doesn't then you have choice two.
    You extract the data from database1 using java for table1.
    You extract the data from database2 using java for table2.
    You write java code that merges the data from both sources.
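
    Since both databases reside on the same ASE server, one connection (e.g. DS1) can usually run the cross-database form; on Sybase ASE objects are qualified as database.owner.object. A hedged sketch with the thread's database names and hypothetical owner/column names:

    SELECT a.myfield, b.otherfield
    FROM   DB1.dbo.TableA a
    JOIN   DB2.dbo.TableB b ON a.id = b.id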

  • How to view tables in another schema in the database

    I am starting to use SQL Developer 1.5.
    We can connect to an Oracle database successfully; expanding Tables shows the list of tables for one schema.
    We have other schemas in the database. In the query panel, when we type the name of another schema such as tcs., some table names pop up in the IntelliSense.
    How can we show the list of tables in the other schemas within the same database?

    In the SQL Developer left panel there is a browse tree. There is an 'Other Users' branch under each database; expand that and you will see all the options to check other users' objects. Your user needs the appropriate data-dictionary privileges.
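
    If you just want the list as a query result rather than in the tree, the data dictionary can also be queried directly (schema name taken from the post):

    SELECT table_name
    FROM   all_tables
    WHERE  owner = 'TCS'
    ORDER  BY table_name;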

  • How to create table in another schema of same database

    Hi..
    I have a database DB1 with 2 schemas/users in it: Usr1 and Usr2.
    I created a TEMP table in the Usr1 schema, and then tried the following statement in the Usr2 schema:
    CREATE TABLE TEMP AS SELECT * FROM Usr1.TEMP;
    It gives this error:
    ORA-00942: table or view does not exist
    What is the reason for that?
    Thank you

    josh1612 wrote:
    What other grants do I need to give so as to replicate the primary keys also?
    That's not a matter of grants. It's the way the CREATE TABLE AS SELECT statement works: it does not copy over indexes, primary key constraints, unique constraints, foreign key constraints, etc.
    If you want to copy all of that over, you would probably want to get the DDL from the original table (using the DBMS_METADATA package if you're on a recent version), modify the DDL with the new schema name, create the table, indexes, and constraints, and then do an INSERT ... SELECT to populate the data. Or do an export and import of the table from one schema to another.
    Justin
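
    A small sketch of the steps Justin describes, using the user and table names from the thread (run the grant as Usr1 first):

    -- as USR1: let USR2 read the table
    GRANT SELECT ON temp TO usr2;

    -- as USR2: copies columns and data only, no keys/indexes/constraints
    CREATE TABLE temp AS SELECT * FROM usr1.temp;

    -- to rebuild the full definition instead, pull the DDL and edit the schema name
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'TEMP', 'USR1') FROM dual;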

  • How to copy data, table rows between tables in different schema

    How do I copy rows from one table in one schema into another table in another schema in the same database?

    Hi,
    Grant access on the respective table in the source schema to the destination schema. If the table does not exist in the destination schema, use CTAS; otherwise use INSERT INTO ... SELECT with your query.
    - Pavan Kumar N
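
    For example, with hypothetical object names, once the grant is in place:

    -- as the source user
    GRANT SELECT ON my_table TO dest_user;

    -- as the destination user, when the target table already exists
    INSERT INTO dest_table
    SELECT * FROM src_user.my_table;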

  • Injecting data into a star schema from a flat staging table

    I'm trying to work out the best approach for getting data from a very flat staging table and loading it into a star schema. I take a row from a table with, for example, 50 different attributes about a person and then load these into a host of different tables, including linking tables.
    One of the attributes in the staging table is an instruction to either insert the person and their new data, update a person and some component of their data, or even terminate a person's records.
    I plan to use PL/SQL but I'm not sure of the best approach.
    The staging table will be loaded every 10 minutes and will contain about 300 updates.
    I'm not sure if I should just select the staging records into a cursor and then insert into the various tables?
    Has anyone got any working examples based on a similar experience?
    I can provide a working example if required.

    The database has some elements that make SQL a tad harder to use. For example:
    CREATE TABLE staging
    (person_id NUMBER(10) NOT NULL ,
    title VARCHAR2(15) NULL ,
    initials VARCHAR2(5) NULL ,
    forename VARCHAR2(30) NULL ,
    middle_name VARCHAR2(30) NULL ,
    surname VARCHAR2(50) NULL,
    dial_number VARCHAR2(30) NULL,
    Is_Contactable CHAR(1) NULL);
    INSERT INTO staging
    (person_id, title, initials, forename, middle_name, surname, dial_number, is_contactable)
    VALUES (12345, 'Mr', NULL, 'Joe', NULL, 'Bloggs', '0117512345', 'Y');
    CREATE TABLE person
    (person_id NUMBER(10) NOT NULL ,
    title VARCHAR2(15) NULL ,
    initials VARCHAR2(5) NULL ,
    forename VARCHAR2(30) NULL ,
    middle_name VARCHAR2(30) NULL ,
    surname VARCHAR2(50) NULL);
    CREATE UNIQUE INDEX XPKPerson ON Person
    (Person_ID ASC);
    ALTER TABLE Person
    ADD CONSTRAINT XPKPerson PRIMARY KEY (Person_ID);
    CREATE TABLE person_comm
    (person_id NUMBER(10) NOT NULL ,
    comm_type_id NUMBER(10) NOT NULL ,
    comm_id NUMBER(10) NOT NULL );
    CREATE UNIQUE INDEX XPKPerson_Comm ON Person_Comm
    (Person_ID ASC,Comm_Type_ID ASC,Comm_ID ASC);
    ALTER TABLE Person_Comm
    ADD CONSTRAINT XPKPerson_Comm PRIMARY KEY (Person_ID,Comm_Type_ID,Comm_ID);
    CREATE TABLE person_comm_preference
    (person_id NUMBER(10) NOT NULL ,
    comm_type_id NUMBER(10) NOT NULL ,
    Is_Contactable CHAR(1) NULL);
    CREATE UNIQUE INDEX XPKPerson_Comm_Preference ON Person_Comm_Preference
    (Person_ID ASC,Comm_Type_ID ASC);
    ALTER TABLE Person_Comm_Preference
    ADD CONSTRAINT XPKPerson_Comm_Preference PRIMARY KEY (Person_ID,Comm_Type_ID);
    CREATE TABLE comm_type
    (comm_type_id NUMBER(10) NOT NULL ,
    NAME VARCHAR2(25) NULL ,
    description VARCHAR2(100) NULL ,
    comm_table_name VARCHAR2(50) NULL);
    CREATE UNIQUE INDEX XPKComm_Type ON Comm_Type
    (Comm_Type_ID ASC);
    ALTER TABLE Comm_Type
    ADD CONSTRAINT XPKComm_Type PRIMARY KEY (Comm_Type_ID);
    insert into comm_type (comm_type_id, NAME, description, comm_table_name) values ('23456','HOME PHONE','Home Phone Number','PHONE');
    CREATE TABLE phone
    (phone_id NUMBER(10) NOT NULL ,
    dial_number VARCHAR2(30) NULL);
    Take the record from staging, then update:
    'person'
    'person_comm_preference', based on a comm_type of 'HOME_PHONE'
    'person_comm', derived from 'person' and 'person_comm_preference'
    Then update 'phone' with the number, based on the link where 'Comm_ID' (part of the Person_Comm composite primary key) relates to the Phone table's primary key, Phone_ID.
    Does your head hurt as much as mine?
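
    A minimal PL/SQL sketch of one way to drive this, not a full solution: a single pass over STAGING with a MERGE for the person row. The other tables (person_comm_preference, person_comm, phone) would follow the same pattern inside the loop, and the insert/update/terminate instruction column would drive an IF/ELSIF branch around these statements. At ~300 rows per run, a row-by-row loop is workable.

    BEGIN
      FOR r IN (SELECT * FROM staging) LOOP
        MERGE INTO person p
        USING (SELECT r.person_id AS person_id FROM dual) s
        ON (p.person_id = s.person_id)
        WHEN MATCHED THEN UPDATE SET
          p.title       = r.title,
          p.initials    = r.initials,
          p.forename    = r.forename,
          p.middle_name = r.middle_name,
          p.surname     = r.surname
        WHEN NOT MATCHED THEN INSERT
          (person_id, title, initials, forename, middle_name, surname)
          VALUES (r.person_id, r.title, r.initials, r.forename,
                  r.middle_name, r.surname);
        -- ... similar MERGE/INSERT statements for person_comm_preference,
        --     person_comm and phone would go here
      END LOOP;
      COMMIT;
    END;
    /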

  • Creating report from 2 separate database

    All,
    Is there a way to create a report using the "Crystal Reports" plugin for Eclipse that selects data from 2 separate databases? I've joined the 2 tables on the data tab and added some of the fields to the layout tab. Below is the SQL that has been generated thus far:
    jdbc:db2://localhost:50000/PS2Tx
    SELECT "BC_INFO"."BC_STATUS", "BC_INFO"."BC_START_TS", "BC_INFO"."BC_END_TS", "BC_INFO"."BC_ID" FROM   "PAYMENT"."BC_INFO" "BC_INFO"
    EXTERNAL JOIN BC_INFO.BC_ID={?jdbc:db2://localhost:50000/BPRQx: TRANSACTION.BATCHCLOSE_ID}
    jdbc:db2://localhost:50000/BPRQx
    SELECT "TRANSACTION"."TRANS_TIMESTAMP", "TRANSACTION"."BATCHCLOSE_ID" FROM   "FAWLS"."TRANSACTION" "TRANSACTION" WHERE  ("TRANSACTION"."TRANS_TIMESTAMP">="BC_INFO"."BC_START_TS" AND "TRANSACTION"."TRANS_TIMESTAMP"<="BC_INFO"."BC_END_TS")
    The values from the PS2Tx database will be displayed as the first page of the report, which will then allow drilldowns to the details which will come from the BPRQx database.  If I preview the report as it currently is I receive the following error:
    JDBC Error: DB2 SQL error: SQLCODE: -206, SQLSTATE: 42703, SQLERRMC: BC_INFO.BC_START_TS
    I'd really appreciate any help or guidance anyone could provide.
    Thanks,
    Todd

    All,
    We've made changes to the report and I've gotten around this issue by using a subreport linked to the main report.
    Thanks,
    Todd

  • Separate database instead of local

    I want to use a completely separate database in CAF instead of the local database of the SAP WAS. Can this be achieved somehow through configuration or minimal programming?

    Hi,
    If I am getting it correctly, you are asking about the place where we use the DATASOURCE name.
    // requires javax.naming.Context, javax.naming.InitialContext,
    // javax.naming.NamingException and javax.sql.DataSource
    public static final String DATASOURCENAME = "jdbc/MYSQLDB_DS1_ALIAS1";

    Context ctx = null;
    DataSource ds = null;
    try {
        ctx = new InitialContext();
        ds = (DataSource) ctx.lookup(DATASOURCENAME);
    } catch (NamingException ne) {
        PatentLogger.getInstance().logException(CLASSNAME, "DAOFactory()", ne);
    }
    Put this in your class for connecting to the DB.
    Regards,
    Srinivasan Subbiah

  • APEX  How do I create a Form based on a Database Table from other schema

    My APEX schema is WISEXP. I have a database table that resides in the WISDW schema in the same database. I want to create a tabular form based on this table.
    I am not able to create a FORM based on the table from the WISDW schema. I am able to create a FORM based on SQL from this table in the WISDW schema, but it does not perform any action on update/insert of rows.
    Appropriate synonyms and grants are created. Is there a limitation, or am I missing something? Please advise.
    thanks
    Rupen

    If Rupen is using 2.2 or 2.2.1, it is likely he is running into the bug described here: Re: Workspace to Schema Assignments
    In that case, it may be a necessary and sufficient workaround to assign the foreign schema to the workspace.
    Scott
    Message was edited by:
    sspadafo
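
    For reference, the grants and synonym the poster says are already in place would look something like this (table name hypothetical):

    -- as WISDW
    GRANT SELECT, INSERT, UPDATE, DELETE ON my_table TO wisexp;

    -- as WISEXP
    CREATE SYNONYM my_table FOR wisdw.my_table;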

  • Can Oracle 11g RAC support 2 separate databases?

    1. Can Oracle 11g RAC support 2 separate databases, given that it asks for a SCAN IP for the cluster with a single listener port number?
    2. Will the Oracle 11.2 database connections work successfully if I set up a mandatory DNS server within the Oracle RAC servers?
    I'm leading a team on an 11g project and would appreciate responses.
    Thanks
    Satish.

    "No Real Application Cluster technology has ever supported more than one database." Quite untrue.
    I know that both you and Tom Kyte take the stand that a cluster should have only 1 instance per node.
    That doesn't mean that it is not possible to have more than one database in a cluster -- i.e more than one instance in a node.
    That doesn't mean that it must always be so. Installing a separate cluster and additional EE + RAC licences for a separate database can be quite expensive. Of course, your reply is "consolidate". That is another debate that has been going on for years.
    A discussion on multiple databases on RAC is also at
    http://www.freelists.org/post/oracle-l/multiple-databases-in-a-single-RAC-cluster
    Hemant K Chitale
    Edited by: Hemant K Chitale on Jan 12, 2010 11:15 AM

  • Comparison of schemas across different database Instances

    Hi All,
    I need to compare around 8 schemas across 4 different instances. We expect them to be the same (at least the constraints and indexes), but we have some exceptions. We don't have any database links. I tried using PL/SQL Developer; it only gives the changes for source DB vs target DB. Which is the best tool for this purpose?

    Here is for remote Schema comparison:
    -- compareRemoteSchemaDDL.sql
    -- This script compares the database tables and columns between
    -- two schemas in separate databases (a local schema and a remote
    -- schema).
    -- The script should be run from the schema of the local user.
    -- A database link will need to be created in order to compare
    -- the schemas.
    column sid_name new_value local_schema
    column this_user new_value local_user
    select substr(global_name, 1, (instr(global_name, '.')) - 1) sid_name, user this_user
    from global_name;
    select db_link from user_db_links;
    define remote_schema=&which_db_link
    spool &spool_file
    set verify off
    column len format 999
    column cons_column format a20 truncate
    column object_name format a30
    break on table_name nodup skip 1
    set pages 999
    set lines 110
    prompt ********************************************************************
    prompt *
    prompt * Comparison of Local Data Base to Remote Data Base
    prompt *
    prompt * User: &local_user
    prompt *
    prompt * Local Data Base: &local_schema
    prompt * Remote Data Base: &remote_schema
    prompt *
    prompt ********************************************************************
    prompt **** Additional Tables on Local DB &local_schema ****
    Select table_name from user_tables
    MINUS
    Select table_name from user_tables@&remote_schema
    order by 1;
    prompt **** Additional Tables on Remote DB &remote_schema ****
    Select table_name from user_tables@&remote_schema
    MINUS
    Select table_name from user_tables
    order by 1;
    prompt **** Additional Columns on Local DB &local_schema ****
    Select table_name, column_name from user_tab_columns
    MINUS
    Select table_name, column_name from user_tab_columns@&remote_schema
    order by 1, 2;
    prompt **** Additional Columns on Remote DB &remote_schema ****
    Select table_name, column_name from user_tab_columns@&remote_schema
    MINUS
    Select table_name, column_name from user_tab_columns
    order by 1, 2;
    prompt **** Columns Changed on Local DB &local_schema ****
    Select c1.table_name, c1.column_name, c1.data_type, c1.data_length len
    from user_tab_columns c1, user_tab_columns@&remote_schema c2
    where c1.table_name = c2.table_name and c1.column_name = c2.column_name
    and ( c1.data_type <> c2.data_type or c1.data_length <> c2.data_length
    or c1.nullable <> c2.nullable
    or nvl(c1.data_precision,0) <> nvl(c2.data_precision,0)
    or nvl(c1.data_scale,0) <> nvl(c2.data_scale,0) )
    order by 1, 2;
    prompt **** Additional Indexes on Local DB &local_schema ****
    Select decode(substr(INDEX_NAME,1,4), 'SYS_', 'SYS_', INDEX_NAME) INDEX_NAME
    from user_indexes
    MINUS
    Select decode(substr(INDEX_NAME,1,4), 'SYS_', 'SYS_', INDEX_NAME) INDEX_NAME
    from user_indexes@&remote_schema
    order by 1;
    prompt **** Additional Indexes on Remote DB &remote_schema ****
    Select decode(substr(INDEX_NAME,1,4), 'SYS_', 'SYS_', INDEX_NAME) INDEX_NAME
    from user_indexes@&remote_schema
    MINUS
    Select decode(substr(INDEX_NAME,1,4), 'SYS_', 'SYS_', INDEX_NAME) INDEX_NAME
    from user_indexes;
    prompt **** Additional Objects on Local DB &local_schema ****
    Select object_name, object_type from user_objects
    where object_type in ( 'PACKAGE', 'FUNCTION', 'PROCEDURE', 'VIEW', 'SEQUENCE' )
    MINUS
    Select object_name, object_type from user_objects@&remote_schema
    where object_type in ( 'PACKAGE', 'FUNCTION', 'PROCEDURE', 'VIEW', 'SEQUENCE' )
    order by 1, 2;
    prompt **** Additional Objects on Remote DB &remote_schema ****
    Select object_name, object_type from user_objects@&remote_schema
    where object_type in ( 'PACKAGE', 'FUNCTION', 'PROCEDURE', 'VIEW', 'SEQUENCE' )
    MINUS
    Select object_name, object_type from user_objects
    where object_type in ( 'PACKAGE', 'FUNCTION', 'PROCEDURE', 'VIEW', 'SEQUENCE' )
    order by 1, 2;
    prompt **** Additional Triggers on Local DB &local_schema ****
    Select trigger_name, trigger_type, table_name from user_triggers
    MINUS
    Select trigger_name, trigger_type, table_name from user_triggers@&remote_schema
    order by 1;
    prompt **** Additional Triggers on Remote DB &remote_schema ****
    Select trigger_name, trigger_type, table_name from user_triggers@&remote_schema
    MINUS
    Select trigger_name, trigger_type, table_name from user_triggers
    order by 1;
    Prompt **** Additional Constraints on Local DB &local_schema ****
    Select TABLE_NAME,
    decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
    CONSTRAINT_NAME ) CONSTRAINT_NAME,
    CONSTRAINT_TYPE
    from USER_CONSTRAINTS
    MINUS
    Select TABLE_NAME,
    decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
    CONSTRAINT_NAME ) CONSTRAINT_NAME,
    CONSTRAINT_TYPE
    from USER_CONSTRAINTS@&remote_schema
    order by 1, 2, 3;
    Prompt **** Additional Constraints on Remote DB &remote_schema ****
    Select TABLE_NAME,
    decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
    CONSTRAINT_NAME ) CONSTRAINT_NAME,
    CONSTRAINT_TYPE
    from USER_CONSTRAINTS@&remote_schema
    MINUS
    Select TABLE_NAME,
    decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
    CONSTRAINT_NAME ) CONSTRAINT_NAME,
    CONSTRAINT_TYPE
    from USER_CONSTRAINTS
    order by 1, 2, 3;
    Prompt **** Additional Constraints Columns on Local DB &local_schema ****
    Select TABLE_NAME,
    decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
    CONSTRAINT_NAME ) CONSTRAINT_NAME,
    COLUMN_NAME cons_column
    from USER_CONS_COLUMNS
    MINUS
    Select TABLE_NAME,
    decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
    CONSTRAINT_NAME ) CONSTRAINT_NAME,
    COLUMN_NAME cons_column
    from USER_CONS_COLUMNS@&remote_schema
    order by 1, 2, 3;
    Prompt **** Additional Constraints Columns on Remote DB &remote_schema ****
    Select TABLE_NAME,
    decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
    CONSTRAINT_NAME ) CONSTRAINT_NAME,
    COLUMN_NAME cons_column
    from USER_CONS_COLUMNS@&remote_schema
    MINUS
    Select TABLE_NAME,
    decode ( substr (CONSTRAINT_NAME,1,5), 'SYS_C', 'SYS_C',
    CONSTRAINT_NAME ) CONSTRAINT_NAME,
    COLUMN_NAME cons_column
    from USER_CONS_COLUMNS
    order by 1, 2, 3;
    prompt **** Additional Public Synonyms on Local DB &local_schema ****
    Select owner, synonym_name from dba_synonyms
    where owner = 'PUBLIC'
    and table_owner = UPPER('&local_user')
    MINUS
    Select owner, synonym_name from dba_synonyms@&remote_schema
    where owner = 'PUBLIC'
    and table_owner = UPPER('&local_user')
    order by 1;
    prompt **** Additional Public Synonyms on Remote DB &remote_schema ****
    Select owner, synonym_name from dba_synonyms@&remote_schema
    where owner = 'PUBLIC'
    and table_owner = UPPER('&local_user')
    MINUS
    Select owner, synonym_name from dba_synonyms
    where owner = 'PUBLIC'
    and table_owner = UPPER('&local_user')
    order by 1;
    spool off
    Daljit Singh
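
    As the script header notes, a database link to the remote schema must exist before running it; a minimal sketch (all connection details are placeholders):

    CREATE DATABASE LINK remote_db
      CONNECT TO remote_user IDENTIFIED BY remote_password
      USING 'remote_tns_alias';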

  • Unable to load data from FDMEE staging table to HFM tables

    Hi,
    We have installed EPM 11.1.2.3 with all the latest related products (ODI/FDMEE) in our development environment.
    We are in the process of loading data from EBS R12 to HFM using ERPI (Data Management in EPM 11.1.2.3). We could import and validate the data, but when we try to export the data to HFM the process keeps running for hours (it neither gives any error nor completes).
    When we check the process details in the ODI work repository, the process statuses are as follows:
    COMM_LOAD_BALANCES - Running (for the past 1 day, still running)
    EBS_GL_LOAD_BALANCES_DATA - Successful
    COMM_LOAD_BALANCES - Successful
    We can load data into the staging table of the FDMEE database schema, and we are even able to drill through to the source system (EBS R12) from the Data Load Workbench, but we are not able to load the data into the HFM application.
    Log details from the process are below.
    2013-11-05 17:04:59,747 INFO  [AIF]: FDMEE Process Start, Process ID: 31
    2013-11-05 17:04:59,747 INFO  [AIF]: FDMEE Logging Level: 4
    2013-11-05 17:04:59,748 INFO  [AIF]: FDMEE Log File: C:\FDMEE\outbox\logs\OASIS_31.log
    2013-11-05 17:04:59,748 INFO  [AIF]: User:admin
    2013-11-05 17:04:59,748 INFO  [AIF]: Location:VisionLoc (Partitionkey:1)
    2013-11-05 17:04:59,749 INFO  [AIF]: Period Name:JAN (Period Key:1/1/12 12:00 AM)
    2013-11-05 17:04:59,749 INFO  [AIF]: Category Name:VisionCat (Category key:3)
    2013-11-05 17:04:59,749 INFO  [AIF]: Rule Name:VisionRule (Rule ID:2)
    2013-11-05 17:05:00,844 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2013-11-05 17:05:00,844 INFO  [AIF]: Java Platform: java1.6.0_37
    2013-11-05 17:05:02,910 INFO  [AIF]: -------START IMPORT STEP-------
    2013-11-05 17:05:02,953 INFO  [AIF]: -------END IMPORT STEP-------
    2013-11-05 17:05:03,030 INFO  [AIF]: -------START EXPORT STEP-------
    2013-11-05 17:05:03,108 INFO  [AIF]:
    Move Data for Period 'JAN'
    Any help on above is much appreciated.
    Thank you
    Regards
    Praneeth

    Hi,
    I have followed steps 1 & 2 above. Now the log shows something like the below:
    2013-11-05 09:47:31,179 INFO  [AIF]: FDMEE Process Start, Process ID: 22
    2013-11-05 09:47:31,179 INFO  [AIF]: FDMEE Logging Level: 4
    2013-11-05 09:47:31,179 INFO  [AIF]: FDMEE Log File: C:\FDMEE\outbox\logs\OASIS_22.log
    2013-11-05 09:47:31,180 INFO  [AIF]: User:admin
    2013-11-05 09:47:31,180 INFO  [AIF]: Location:VisionLoc (Partitionkey:1)
    2013-11-05 09:47:31,180 INFO  [AIF]: Period Name:JAN (Period Key:1/1/12 12:00 AM)
    2013-11-05 09:47:31,180 INFO  [AIF]: Category Name:VisionCat (Category key:3)
    2013-11-05 09:47:31,181 INFO  [AIF]: Rule Name:VisionRule (Rule ID:2)
    2013-11-05 09:47:32,378 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2013-11-05 09:47:32,378 INFO  [AIF]: Java Platform: java1.6.0_37
    2013-11-05 09:47:34,652 INFO  [AIF]: -------START IMPORT STEP-------
    2013-11-05 09:47:34,698 INFO  [AIF]: -------END IMPORT STEP-------
    2013-11-05 09:47:34,744 INFO  [AIF]: -------START EXPORT STEP-------
    2013-11-05 09:47:34,828 INFO  [AIF]:
    Move Data for Period 'JAN'
    2013-11-08 11:49:10,478 INFO  [AIF]: FDMEE Process Start, Process ID: 22
    2013-11-08 11:49:10,493 INFO  [AIF]: FDMEE Logging Level: 5
    2013-11-08 11:49:10,493 INFO  [AIF]: FDMEE Log File: C:\FDMEE\outbox\logs\OASIS_22.log
    2013-11-08 11:49:10,494 INFO  [AIF]: User:admin
    2013-11-08 11:49:10,494 INFO  [AIF]: Location:VISIONLOC (Partitionkey:1)
    2013-11-08 11:49:10,494 INFO  [AIF]: Period Name:JAN (Period Key:1/1/12 12:00 AM)
    2013-11-08 11:49:10,495 INFO  [AIF]: Category Name:VISIONCAT (Category key:1)
    2013-11-08 11:49:10,495 INFO  [AIF]: Rule Name:VISIONRULE (Rule ID:1)
    2013-11-08 11:49:11,903 INFO  [AIF]: Jython Version: 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
    [Oracle JRockit(R) (Oracle Corporation)]
    2013-11-08 11:49:11,909 INFO  [AIF]: Java Platform: java1.6.0_37
    2013-11-08 11:49:14,037 INFO  [AIF]: -------START IMPORT STEP-------
    2013-11-08 11:49:14,105 INFO  [AIF]: -------END IMPORT STEP-------
    2013-11-08 11:49:14,152 INFO  [AIF]: -------START EXPORT STEP-------
    2013-11-08 11:49:14,178 DEBUG [AIF]: CommData.exportData - START
    2013-11-08 11:49:14,183 DEBUG [AIF]: CommData.getRuleInfo - START
    2013-11-08 11:49:14,188 DEBUG [AIF]:
            SELECT brl.RULE_ID
            ,br.RULE_NAME
            ,brl.PARTITIONKEY
            ,brl.CATKEY
            ,part.PARTVALGROUP
            ,br.SOURCE_SYSTEM_ID
            ,ss.SOURCE_SYSTEM_TYPE
            ,CASE
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'EBS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'PS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FUSION%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FILE%' THEN 'N'
              ELSE 'Y'
            END SOURCE_ADAPTER_FLAG
            ,app.APPLICATION_ID
            ,app.TARGET_APPLICATION_NAME
            ,app.TARGET_APPLICATION_TYPE
            ,app.DATA_LOAD_METHOD
            ,brl.PLAN_TYPE
            ,CASE brl.PLAN_TYPE
              WHEN 'PLAN1' THEN 1
              WHEN 'PLAN2' THEN 2
              WHEN 'PLAN3' THEN 3
              WHEN 'PLAN4' THEN 4
              WHEN 'PLAN5' THEN 5
              ELSE 0
            END PLAN_NUMBER
            ,br.INCL_ZERO_BALANCE_FLAG
            ,br.PERIOD_MAPPING_TYPE
            ,br.INCLUDE_ADJ_PERIODS_FLAG
            ,br.BALANCE_TYPE ACTUAL_FLAG
            ,br.AMOUNT_TYPE
            ,br.BALANCE_SELECTION
            ,br.BALANCE_METHOD_CODE
            ,COALESCE(br.SIGNAGE_METHOD, 'ABSOLUTE') SIGNAGE_METHOD
            ,br.CURRENCY_CODE
            ,br.BAL_SEG_VALUE_OPTION_CODE
            ,brl.EXECUTION_MODE
            ,COALESCE(brl.IMPORT_FROM_SOURCE_FLAG, 'Y') IMPORT_FROM_SOURCE_FLAG
            ,COALESCE(brl.RECALCULATE_FLAG, 'N') RECALCULATE_FLAG
            ,COALESCE(brl.EXPORT_TO_TARGET_FLAG, 'N') EXPORT_TO_TARGET_FLAG
            ,CASE
              WHEN (br.LEDGER_GROUP_ID IS NOT NULL) THEN 'MULTI'
              WHEN (br.SOURCE_LEDGER_ID IS NOT NULL) THEN 'SINGLE'
              ELSE 'NONE'
            END LEDGER_GROUP_CODE
            ,COALESCE(br.BALANCE_AMOUNT_BS, 'YTD') BALANCE_AMOUNT_BS
            ,COALESCE(br.BALANCE_AMOUNT_IS, 'PERIODIC') BALANCE_AMOUNT_IS
            ,br.LEDGER_GROUP
            ,(SELECT brd.DETAIL_CODE
              FROM AIF_BAL_RULE_DETAILS brd
              WHERE brd.RULE_ID = br.RULE_ID
              AND brd.DETAIL_TYPE = 'LEDGER'       
            ) PS_LEDGER
            ,CASE lg.LEDGER_TEMPLATE
              WHEN 'COMMITMENT' THEN 'Y'
              ELSE 'N'
            END KK_FLAG
            ,p.LAST_UPDATED_BY
            ,p.AIF_WEB_SERVICE_URL WEB_SERVICE_URL
            ,p.EPM_ORACLE_INSTANCE
            FROM AIF_PROCESSES p
            INNER JOIN AIF_BAL_RULE_LOADS brl
              ON brl.LOADID = p.PROCESS_ID
            INNER JOIN AIF_BALANCE_RULES br
              ON br.RULE_ID = brl.RULE_ID
            INNER JOIN AIF_SOURCE_SYSTEMS ss
              ON ss.SOURCE_SYSTEM_ID = br.SOURCE_SYSTEM_ID
            INNER JOIN AIF_TARGET_APPLICATIONS app
              ON app.APPLICATION_ID = brl.APPLICATION_ID
            INNER JOIN TPOVPARTITION part
              ON part.PARTITIONKEY = br.PARTITIONKEY
            INNER JOIN TBHVIMPGROUP imp
              ON imp.IMPGROUPKEY = part.PARTIMPGROUP
            LEFT OUTER JOIN AIF_COA_LEDGERS l
              ON l.SOURCE_SYSTEM_ID = p.SOURCE_SYSTEM_ID
              AND l.SOURCE_LEDGER_ID = COALESCE(br.SOURCE_LEDGER_ID,imp.IMPSOURCELEDGERID)
            LEFT OUTER JOIN AIF_PS_SET_CNTRL_REC_STG scr
              ON scr.SOURCE_SYSTEM_ID = l.SOURCE_SYSTEM_ID
              AND scr.SETCNTRLVALUE = l.SOURCE_LEDGER_NAME
              AND scr.RECNAME = 'LED_GRP_TBL'
            LEFT OUTER JOIN AIF_PS_LED_GRP_TBL_STG lg
              ON lg.SOURCE_SYSTEM_ID = scr.SOURCE_SYSTEM_ID
              AND lg.SETID = scr.SETID
              AND lg.LEDGER_GROUP = br.LEDGER_GROUP
            WHERE p.PROCESS_ID = 22
    2013-11-08 11:49:14,195 DEBUG [AIF]:
          SELECT adim.BALANCE_COLUMN_NAME DIMNAME
          ,adim.DIMENSION_ID
          ,dim.TARGET_DIMENSION_CLASS_NAME
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID1
          ) COA_SEGMENT_NAME1
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID2
          ) COA_SEGMENT_NAME2
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID3
          ) COA_SEGMENT_NAME3
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID4
          ) COA_SEGMENT_NAME4
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID5
          ) COA_SEGMENT_NAME5
          ,(SELECT CASE mdd.ORPHAN_OPTION_CODE
              WHEN 'CHILD' THEN 'N'
              WHEN 'ROOT' THEN 'N'
              ELSE 'Y'
            END DIMENSION_FILTER_FLAG
            FROM AIF_MAP_DIM_DETAILS_V mdd
            ,AIF_MAPPING_RULES mr
            WHERE mr.PARTITIONKEY = tpp.PARTITIONKEY
            AND mdd.RULE_ID = mr.RULE_ID
            AND mdd.DIMENSION_ID = adim.DIMENSION_ID
          ) DIMENSION_FILTER_FLAG
          ,tiie.IMPCONCATCHAR
          FROM TPOVPARTITION tpp
          INNER JOIN AIF_TARGET_APPL_DIMENSIONS adim
            ON adim.APPLICATION_ID = 2
          INNER JOIN AIF_DIMENSIONS dim
            ON dim.DIMENSION_ID = adim.DIMENSION_ID
          LEFT OUTER JOIN TBHVIMPITEMERPI tiie
            ON tiie.IMPGROUPKEY = tpp.PARTIMPGROUP
            AND tiie.IMPFLDFIELDNAME = adim.BALANCE_COLUMN_NAME
            AND tiie.IMPMAPTYPE = 'ERP'
          WHERE tpp.PARTITIONKEY = 1
          AND adim.BALANCE_COLUMN_NAME IS NOT NULL
          ORDER BY adim.BALANCE_COLUMN_NAME
    2013-11-08 11:49:14,197 DEBUG [AIF]: {'APPLICATION_ID': 2L, 'IMPORT_FROM_SOURCE_FLAG': u'N', 'PLAN_TYPE': None, 'RULE_NAME': u'VISIONRULE', 'ACTUAL_FLAG': u'A', 'IS_INCREMENTAL_LOAD': False, 'EPM_ORACLE_INSTANCE': u'C:\\Oracle\\Middleware\\user_projects\\epmsystem1', 'CATKEY': 1L, 'BAL_SEG_VALUE_OPTION_CODE': u'ALL', 'INCLUDE_ADJ_PERIODS_FLAG': u'N', 'PERIOD_MAPPING_TYPE': u'EXPLICIT', 'SOURCE_SYSTEM_TYPE': u'EBS_R12', 'LEDGER_GROUP': None, 'TARGET_APPLICATION_NAME': u'OASIS', 'RECALCULATE_FLAG': u'N', 'SOURCE_SYSTEM_ID': 2L, 'TEMP_DATA_TABLE_NAME': 'TDATASEG_T', 'KK_FLAG': u'N', 'AMOUNT_TYPE': u'MONETARY', 'EXPORT_TO_TARGET_FLAG': u'Y', 'DATA_TABLE_NAME': 'TDATASEG', 'DIMNAME_LIST': [u'ACCOUNT', u'ENTITY', u'ICP', u'UD1', u'UD2', u'UD3', u'UD4'], 'TDATAMAPTYPE': 'ERP', 'LAST_UPDATED_BY': u'admin', 'DIMNAME_MAP': {u'UD3': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT5', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD3', 'DIMENSION_ID': 9L}, u'ICP': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'ICP', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT7', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ICP', 'DIMENSION_ID': 8L}, u'ENTITY': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Entity', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT1', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ENTITY', 'DIMENSION_ID': 12L}, u'UD2': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT4', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD2', 'DIMENSION_ID': 11L}, u'UD4': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT6', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD4', 'DIMENSION_ID': 1L}, u'ACCOUNT': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Account', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT3', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ACCOUNT', 'DIMENSION_ID': 10L}, u'UD1': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT2', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD1', 'DIMENSION_ID': 7L}}, 'TARGET_APPLICATION_TYPE': u'HFM', 'PARTITIONKEY': 1L, 'PARTVALGROUP': u'[NONE]', 'LEDGER_GROUP_CODE': u'SINGLE', 'INCLUDE_ZERO_BALANCE_FLAG': u'N', 'EXECUTION_MODE': u'SNAPSHOT', 'PLAN_NUMBER': 0L, 'PS_LEDGER': None, 'BALANCE_SELECTION': u'FUNCTIONAL', 'BALANCE_AMOUNT_IS': u'PERIODIC', 'RULE_ID': 1L, 'BALANCE_AMOUNT_BS': u'YTD', 'CURRENCY_CODE': None, 'SOURCE_ADAPTER_FLAG': u'N', 'BALANCE_METHOD_CODE': u'STANDARD', 'SIGNAGE_METHOD': u'SAME', 'WEB_SERVICE_URL': u'http://localhost:9000/aif', 'DATA_LOAD_METHOD': u'CLASSIC_VIA_EPMI'}
    2013-11-08 11:49:14,197 DEBUG [AIF]: CommData.getRuleInfo - END
    2013-11-08 11:49:14,224 DEBUG [AIF]: CommData.insertPeriods - START
    2013-11-08 11:49:14,228 DEBUG [AIF]: CommData.getLedgerListAndMap - START
    2013-11-08 11:49:14,229 DEBUG [AIF]: CommData.getLedgerSQL - START
    2013-11-08 11:49:14,229 DEBUG [AIF]: CommData.getLedgerSQL - END
    2013-11-08 11:49:14,229 DEBUG [AIF]:
              SELECT l.SOURCE_LEDGER_ID
              ,l.SOURCE_LEDGER_NAME
              ,l.SOURCE_COA_ID
              ,l.CALENDAR_ID
              ,'0' SETID
              ,l.PERIOD_TYPE
              ,NULL LEDGER_TABLE_NAME
              FROM AIF_BALANCE_RULES br
              ,AIF_COA_LEDGERS l
              WHERE br.RULE_ID = 1
              AND l.SOURCE_SYSTEM_ID = br.SOURCE_SYSTEM_ID
              AND l.SOURCE_LEDGER_ID = br.SOURCE_LEDGER_ID
    2013-11-08 11:49:14,230 DEBUG [AIF]: CommData.getLedgerListAndMap - END
    2013-11-08 11:49:14,232 DEBUG [AIF]:
            INSERT INTO AIF_PROCESS_PERIODS (
              PROCESS_ID
              ,PERIODKEY
              ,PERIOD_ID
              ,ADJUSTMENT_PERIOD_FLAG
              ,GL_PERIOD_YEAR
              ,GL_PERIOD_NUM
              ,GL_PERIOD_NAME
              ,GL_PERIOD_CODE
              ,GL_EFFECTIVE_PERIOD_NUM
              ,YEARTARGET
              ,PERIODTARGET
              ,IMP_ENTITY_TYPE
              ,IMP_ENTITY_ID
              ,IMP_ENTITY_NAME
              ,TRANS_ENTITY_TYPE
              ,TRANS_ENTITY_ID
              ,TRANS_ENTITY_NAME
              ,PRIOR_PERIOD_FLAG
              ,SOURCE_LEDGER_ID
                    SELECT DISTINCT brl.LOADID PROCESS_ID
                    ,pp.PERIODKEY PERIODKEY
                    ,prd.PERIOD_ID
                    ,COALESCE(prd.ADJUSTMENT_PERIOD_FLAG, 'N') ADJUSTMENT_PERIOD_FLAG
                    ,COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) GL_PERIOD_YEAR
                    ,COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0) GL_PERIOD_NUM
                    ,COALESCE(prd.PERIOD_NAME, ppsrc.GL_PERIOD_NAME,'0') GL_PERIOD_NAME
                    ,COALESCE(prd.PERIOD_CODE, CAST(COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0) AS VARCHAR(38)),'0') GL_PERIOD_CODE
                    ,(COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) * 10000 + COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0)) GL_EFFECTIVE_PERIOD_NUM
                    ,COALESCE(ppa.YEARTARGET, pp.YEARTARGET) YEARTARGET
                    ,COALESCE(ppa.PERIODTARGET, pp.PERIODTARGET) PERIODTARGET
                    ,'PROCESS_BAL_IMP' IMP_ENTITY_TYPE
                    ,(COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) * 10000 + COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0)) IMP_ENTITY_ID
                    ,COALESCE(prd.PERIOD_NAME, ppsrc.GL_PERIOD_NAME,'0') IMP_ENTITY_NAME
                    ,'PROCESS_BAL_TRANS' TRANS_ENTITY_TYPE
                    ,(COALESCE(prd.YEAR, ppsrc.GL_PERIOD_YEAR,0) * 10000 + COALESCE(prd.PERIOD_NUM, ppsrc.GL_PERIOD_NUM,0)) TRANS_ENTITY_ID
                    ,pp.PERIODDESC TRANS_ENTITY_NAME
                    ,'N' PRIOR_PERIOD_FLAG
                    ,2202 SOURCE_LEDGER_ID
                    FROM (
                      AIF_BAL_RULE_LOADS brl
                      INNER JOIN TPOVCATEGORY pc
                        ON pc.CATKEY = brl.CATKEY
                      INNER JOIN TPOVPERIOD_FLAT_V pp
                        ON pp.PERIODFREQ = pc.CATFREQ
                        AND pp.PERIODKEY >= brl.START_PERIODKEY
                        AND pp.PERIODKEY <= brl.END_PERIODKEY
                      LEFT OUTER JOIN TPOVPERIODADAPTOR_FLAT_V ppa
                        ON ppa.PERIODKEY = pp.PERIODKEY
                        AND ppa.PERIODFREQ = pp.PERIODFREQ
                        AND ppa.INTSYSTEMKEY = 'OASIS'
                    INNER JOIN TPOVPERIODSOURCE ppsrc
                      ON ppsrc.PERIODKEY = pp.PERIODKEY
                      AND ppsrc.MAPPING_TYPE = 'EXPLICIT'
                      AND ppsrc.SOURCE_SYSTEM_ID = 2
                      AND ppsrc.CALENDAR_ID IN ('29067')
                    LEFT OUTER JOIN AIF_GL_PERIODS_STG prd
                      ON prd.PERIOD_ID = ppsrc.PERIOD_ID
                      AND prd.SOURCE_SYSTEM_ID = ppsrc.SOURCE_SYSTEM_ID
                      AND prd.CALENDAR_ID = ppsrc.CALENDAR_ID
              AND prd.SETID = '0'
              AND prd.PERIOD_TYPE = '507'
                      AND prd.ADJUSTMENT_PERIOD_FLAG = 'N'
                    WHERE brl.LOADID = 22
                    ORDER BY pp.PERIODKEY
                    ,GL_EFFECTIVE_PERIOD_NUM
    2013-11-08 11:49:14,235 DEBUG [AIF]: CommData.insertPeriods - END
    2013-11-08 11:49:14,240 DEBUG [AIF]: CommData.moveData - START
    2013-11-08 11:49:14,242 DEBUG [AIF]: CommData.getPovList - START
    2013-11-08 11:49:14,242 DEBUG [AIF]:
            SELECT PARTITIONKEY
            ,PARTNAME
            ,CATKEY
            ,CATNAME
            ,PERIODKEY
            ,COALESCE(PERIODDESC, TO_CHAR(PERIODKEY,'YYYY-MM-DD HH24:MI:SS')) PERIODDESC
            ,RULE_ID
            ,RULE_NAME
            FROM (
              SELECT DISTINCT brl.PARTITIONKEY
              ,part.PARTNAME
              ,brl.CATKEY
              ,cat.CATNAME
              ,pprd.PERIODKEY
              ,pp.PERIODDESC
              ,brl.RULE_ID
              ,br.RULE_NAME
              FROM AIF_BAL_RULE_LOADS brl
              INNER JOIN AIF_BALANCE_RULES br
                ON br.RULE_ID = brl.RULE_ID
              INNER JOIN TPOVPARTITION part
                ON part.PARTITIONKEY = brl.PARTITIONKEY
              INNER JOIN TPOVCATEGORY cat
                ON cat.CATKEY = brl.CATKEY
              INNER JOIN AIF_PROCESS_PERIODS pprd
                ON pprd.PROCESS_ID = brl.LOADID
              LEFT OUTER JOIN TPOVPERIOD pp
                ON pp.PERIODKEY = pprd.PERIODKEY
              WHERE brl.LOADID = 22
            ) q
            ORDER BY PARTITIONKEY
            ,CATKEY
            ,PERIODKEY
            ,RULE_ID
    2013-11-08 11:49:14,244 DEBUG [AIF]: CommData.getPovList - END
    2013-11-08 11:49:14,245 INFO  [AIF]:
    Move Data for Period 'JAN'
    2013-11-08 11:49:14,246 DEBUG [AIF]:
              UPDATE TDATASEG
              SET LOADID = 22
              WHERE PARTITIONKEY = 1
              AND CATKEY = 1
              AND RULE_ID = 1
              AND LOADID < 22
                AND PERIODKEY = '2012-01-01'
    2013-11-08 11:49:14,320 DEBUG [AIF]: Number of Rows updated on TDATASEG: 1842
    2013-11-08 11:49:14,320 DEBUG [AIF]:
            INSERT INTO AIF_APPL_LOAD_AUDIT (
              LOADID
              ,TARGET_APPLICATION_TYPE
              ,TARGET_APPLICATION_NAME
              ,PLAN_TYPE
              ,SOURCE_LEDGER_ID
              ,EPM_YEAR
              ,EPM_PERIOD
              ,SNAPSHOT_FLAG
              ,SEGMENT_FILTER_FLAG
              ,PARTITIONKEY
              ,CATKEY
              ,RULE_ID
              ,PERIODKEY
              ,EXPORT_TO_TARGET_FLAG
            SELECT DISTINCT 22
            ,TARGET_APPLICATION_TYPE
            ,TARGET_APPLICATION_NAME
            ,PLAN_TYPE
            ,SOURCE_LEDGER_ID
            ,EPM_YEAR
            ,EPM_PERIOD
            ,SNAPSHOT_FLAG
            ,SEGMENT_FILTER_FLAG
            ,PARTITIONKEY
            ,CATKEY
            ,RULE_ID
            ,PERIODKEY
            ,'N'
            FROM AIF_APPL_LOAD_AUDIT
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND RULE_ID = 1
            AND LOADID < 22
                AND PERIODKEY = '2012-01-01'
    2013-11-08 11:49:14,321 DEBUG [AIF]: Number of Rows inserted into AIF_APPL_LOAD_AUDIT: 1
    2013-11-08 11:49:14,322 DEBUG [AIF]:
            INSERT INTO AIF_APPL_LOAD_PRD_AUDIT (
              LOADID
              ,GL_PERIOD_ID
              ,GL_PERIOD_YEAR
              ,DELTA_RUN_ID
              ,PARTITIONKEY
              ,CATKEY
              ,RULE_ID
              ,PERIODKEY
            SELECT DISTINCT 22
            ,GL_PERIOD_ID
            ,GL_PERIOD_YEAR
            ,DELTA_RUN_ID
            ,PARTITIONKEY
            ,CATKEY
            ,RULE_ID
            ,PERIODKEY
            FROM AIF_APPL_LOAD_PRD_AUDIT
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND RULE_ID = 1
            AND LOADID < 22
                AND PERIODKEY = '2012-01-01'
    2013-11-08 11:49:14,323 DEBUG [AIF]: Number of Rows inserted into AIF_APPL_LOAD_PRD_AUDIT: 1
    2013-11-08 11:49:14,325 DEBUG [AIF]: CommData.moveData - END
    2013-11-08 11:49:14,332 DEBUG [AIF]: CommData.updateWorkflow - START
    2013-11-08 11:49:14,332 DEBUG [AIF]:
        SELECT tlp.PROCESSSTATUS
        ,tlps.PROCESSSTATUSDESC
        ,CASE WHEN (tlp.INTLOCKSTATE = 60) THEN 'Y' ELSE 'N' END LOCK_FLAG
        FROM TLOGPROCESS tlp
        ,TLOGPROCESSSTATES tlps
        WHERE tlp.PARTITIONKEY = 1
        AND tlp.CATKEY = 1
        AND tlp.PERIODKEY = '2012-01-01'
        AND tlp.RULE_ID = 1
        AND tlps.PROCESSSTATUSKEY = tlp.PROCESSSTATUS
    2013-11-08 11:49:14,336 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 20
              ,PROCESSEXP = 0
              ,PROCESSENTLOAD = 0
              ,PROCESSENTVAL = 0
              ,PROCESSEXPNOTE = NULL
              ,PROCESSENTLOADNOTE = NULL
              ,PROCESSENTVALNOTE = NULL
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,338 DEBUG [AIF]: CommData.updateWorkflow - END
    2013-11-08 11:49:14,339 DEBUG [AIF]: CommData.purgeInvalidRecordsTDATASEG - START
    2013-11-08 11:49:14,341 DEBUG [AIF]:
            DELETE FROM TDATASEG
            WHERE LOADID = 22
              AND (
            PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
            AND VALID_FLAG = 'N'
    2013-11-08 11:49:14,342 DEBUG [AIF]: Number of Rows deleted from TDATASEG: 0
    2013-11-08 11:49:14,342 DEBUG [AIF]: CommData.purgeInvalidRecordsTDATASEG - END
    2013-11-08 11:49:14,344 DEBUG [AIF]: CommData.updateAppLoadAudit - START
    2013-11-08 11:49:14,344 DEBUG [AIF]:
            UPDATE AIF_APPL_LOAD_AUDIT
            SET EXPORT_TO_TARGET_FLAG = 'Y'
            WHERE LOADID = 22
            AND PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY= '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,345 DEBUG [AIF]: Number of Rows updated on AIF_APPL_LOAD_AUDIT: 1
    2013-11-08 11:49:14,345 DEBUG [AIF]: CommData.updateAppLoadAudit - END
    2013-11-08 11:49:14,345 DEBUG [AIF]: CommData.updateWorkflow - START
    2013-11-08 11:49:14,346 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 21
              ,PROCESSEXP = 1
              ,PROCESSEXPNOTE = 'Export Successful'
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,347 DEBUG [AIF]: CommData.updateWorkflow - END
    2013-11-08 11:49:14,347 DEBUG [AIF]: CommData.exportData - END
    2013-11-08 11:49:14,404 DEBUG [AIF]: HfmData.loadData - START
    2013-11-08 11:49:14,404 DEBUG [AIF]: CommData.getRuleInfo - START
    2013-11-08 11:49:14,404 DEBUG [AIF]:
            SELECT brl.RULE_ID
            ,br.RULE_NAME
            ,brl.PARTITIONKEY
            ,brl.CATKEY
            ,part.PARTVALGROUP
            ,br.SOURCE_SYSTEM_ID
            ,ss.SOURCE_SYSTEM_TYPE
            ,CASE
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'EBS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'PS%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FUSION%' THEN 'N'
              WHEN ss.SOURCE_SYSTEM_TYPE LIKE 'FILE%' THEN 'N'
              ELSE 'Y'
            END SOURCE_ADAPTER_FLAG
            ,app.APPLICATION_ID
            ,app.TARGET_APPLICATION_NAME
            ,app.TARGET_APPLICATION_TYPE
            ,app.DATA_LOAD_METHOD
            ,brl.PLAN_TYPE
            ,CASE brl.PLAN_TYPE
              WHEN 'PLAN1' THEN 1
              WHEN 'PLAN2' THEN 2
              WHEN 'PLAN3' THEN 3
              WHEN 'PLAN4' THEN 4
              WHEN 'PLAN5' THEN 5
              ELSE 0
            END PLAN_NUMBER
            ,br.INCL_ZERO_BALANCE_FLAG
            ,br.PERIOD_MAPPING_TYPE
            ,br.INCLUDE_ADJ_PERIODS_FLAG
            ,br.BALANCE_TYPE ACTUAL_FLAG
            ,br.AMOUNT_TYPE
            ,br.BALANCE_SELECTION
            ,br.BALANCE_METHOD_CODE
            ,COALESCE(br.SIGNAGE_METHOD, 'ABSOLUTE') SIGNAGE_METHOD
            ,br.CURRENCY_CODE
            ,br.BAL_SEG_VALUE_OPTION_CODE
            ,brl.EXECUTION_MODE
            ,COALESCE(brl.IMPORT_FROM_SOURCE_FLAG, 'Y') IMPORT_FROM_SOURCE_FLAG
            ,COALESCE(brl.RECALCULATE_FLAG, 'N') RECALCULATE_FLAG
            ,COALESCE(brl.EXPORT_TO_TARGET_FLAG, 'N') EXPORT_TO_TARGET_FLAG
            ,CASE
              WHEN (br.LEDGER_GROUP_ID IS NOT NULL) THEN 'MULTI'
              WHEN (br.SOURCE_LEDGER_ID IS NOT NULL) THEN 'SINGLE'
              ELSE 'NONE'
            END LEDGER_GROUP_CODE
            ,COALESCE(br.BALANCE_AMOUNT_BS, 'YTD') BALANCE_AMOUNT_BS
            ,COALESCE(br.BALANCE_AMOUNT_IS, 'PERIODIC') BALANCE_AMOUNT_IS
            ,br.LEDGER_GROUP
            ,(SELECT brd.DETAIL_CODE
              FROM AIF_BAL_RULE_DETAILS brd
              WHERE brd.RULE_ID = br.RULE_ID
              AND brd.DETAIL_TYPE = 'LEDGER'       
            ) PS_LEDGER
            ,CASE lg.LEDGER_TEMPLATE
              WHEN 'COMMITMENT' THEN 'Y'
              ELSE 'N'
            END KK_FLAG
            ,p.LAST_UPDATED_BY
            ,p.AIF_WEB_SERVICE_URL WEB_SERVICE_URL
            ,p.EPM_ORACLE_INSTANCE
            FROM AIF_PROCESSES p
            INNER JOIN AIF_BAL_RULE_LOADS brl
              ON brl.LOADID = p.PROCESS_ID
            INNER JOIN AIF_BALANCE_RULES br
              ON br.RULE_ID = brl.RULE_ID
            INNER JOIN AIF_SOURCE_SYSTEMS ss
              ON ss.SOURCE_SYSTEM_ID = br.SOURCE_SYSTEM_ID
            INNER JOIN AIF_TARGET_APPLICATIONS app
              ON app.APPLICATION_ID = brl.APPLICATION_ID
            INNER JOIN TPOVPARTITION part
              ON part.PARTITIONKEY = br.PARTITIONKEY
            INNER JOIN TBHVIMPGROUP imp
              ON imp.IMPGROUPKEY = part.PARTIMPGROUP
            LEFT OUTER JOIN AIF_COA_LEDGERS l
              ON l.SOURCE_SYSTEM_ID = p.SOURCE_SYSTEM_ID
              AND l.SOURCE_LEDGER_ID = COALESCE(br.SOURCE_LEDGER_ID,imp.IMPSOURCELEDGERID)
            LEFT OUTER JOIN AIF_PS_SET_CNTRL_REC_STG scr
              ON scr.SOURCE_SYSTEM_ID = l.SOURCE_SYSTEM_ID
              AND scr.SETCNTRLVALUE = l.SOURCE_LEDGER_NAME
              AND scr.RECNAME = 'LED_GRP_TBL'
            LEFT OUTER JOIN AIF_PS_LED_GRP_TBL_STG lg
              ON lg.SOURCE_SYSTEM_ID = scr.SOURCE_SYSTEM_ID
              AND lg.SETID = scr.SETID
              AND lg.LEDGER_GROUP = br.LEDGER_GROUP
            WHERE p.PROCESS_ID = 22
    2013-11-08 11:49:14,406 DEBUG [AIF]:
          SELECT adim.BALANCE_COLUMN_NAME DIMNAME
          ,adim.DIMENSION_ID
          ,dim.TARGET_DIMENSION_CLASS_NAME
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID1
          ) COA_SEGMENT_NAME1
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID2
          ) COA_SEGMENT_NAME2
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID3
          ) COA_SEGMENT_NAME3
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID4
          ) COA_SEGMENT_NAME4
          ,(SELECT COA_SEGMENT_NAME
            FROM AIF_COA_SEGMENTS cs
            WHERE cs.COA_LINE_ID = tiie.IMPSOURCECOALINEID5
          ) COA_SEGMENT_NAME5
          ,(SELECT CASE mdd.ORPHAN_OPTION_CODE
              WHEN 'CHILD' THEN 'N'
              WHEN 'ROOT' THEN 'N'
              ELSE 'Y'
            END DIMENSION_FILTER_FLAG
            FROM AIF_MAP_DIM_DETAILS_V mdd
            ,AIF_MAPPING_RULES mr
            WHERE mr.PARTITIONKEY = tpp.PARTITIONKEY
            AND mdd.RULE_ID = mr.RULE_ID
            AND mdd.DIMENSION_ID = adim.DIMENSION_ID
          ) DIMENSION_FILTER_FLAG
          ,tiie.IMPCONCATCHAR
          FROM TPOVPARTITION tpp
          INNER JOIN AIF_TARGET_APPL_DIMENSIONS adim
            ON adim.APPLICATION_ID = 2
          INNER JOIN AIF_DIMENSIONS dim
            ON dim.DIMENSION_ID = adim.DIMENSION_ID
          LEFT OUTER JOIN TBHVIMPITEMERPI tiie
            ON tiie.IMPGROUPKEY = tpp.PARTIMPGROUP
            AND tiie.IMPFLDFIELDNAME = adim.BALANCE_COLUMN_NAME
            AND tiie.IMPMAPTYPE = 'ERP'
          WHERE tpp.PARTITIONKEY = 1
          AND adim.BALANCE_COLUMN_NAME IS NOT NULL
          ORDER BY adim.BALANCE_COLUMN_NAME
    2013-11-08 11:49:14,407 DEBUG [AIF]: {'APPLICATION_ID': 2L, 'IMPORT_FROM_SOURCE_FLAG': u'N', 'PLAN_TYPE': None, 'RULE_NAME': u'VISIONRULE', 'ACTUAL_FLAG': u'A', 'IS_INCREMENTAL_LOAD': False, 'EPM_ORACLE_INSTANCE': u'C:\\Oracle\\Middleware\\user_projects\\epmsystem1', 'CATKEY': 1L, 'BAL_SEG_VALUE_OPTION_CODE': u'ALL', 'INCLUDE_ADJ_PERIODS_FLAG': u'N', 'PERIOD_MAPPING_TYPE': u'EXPLICIT', 'SOURCE_SYSTEM_TYPE': u'EBS_R12', 'LEDGER_GROUP': None, 'TARGET_APPLICATION_NAME': u'OASIS', 'RECALCULATE_FLAG': u'N', 'SOURCE_SYSTEM_ID': 2L, 'TEMP_DATA_TABLE_NAME': 'TDATASEG_T', 'KK_FLAG': u'N', 'AMOUNT_TYPE': u'MONETARY', 'EXPORT_TO_TARGET_FLAG': u'Y', 'DATA_TABLE_NAME': 'TDATASEG', 'DIMNAME_LIST': [u'ACCOUNT', u'ENTITY', u'ICP', u'UD1', u'UD2', u'UD3', u'UD4'], 'TDATAMAPTYPE': 'ERP', 'LAST_UPDATED_BY': u'admin', 'DIMNAME_MAP': {u'UD3': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT5', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD3', 'DIMENSION_ID': 9L}, u'ICP': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'ICP', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT7', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ICP', 'DIMENSION_ID': 8L}, u'ENTITY': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Entity', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT1', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ENTITY', 'DIMENSION_ID': 12L}, u'UD2': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT4', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD2', 'DIMENSION_ID': 11L}, u'UD4': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT6', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD4', 'DIMENSION_ID': 1L}, u'ACCOUNT': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Account', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT3', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'ACCOUNT', 'DIMENSION_ID': 10L}, u'UD1': {'IMPCONCATCHAR': None, 'TARGET_DIMENSION_CLASS_NAME': u'Generic', 'COA_SEGMENT_NAME5': None, 'COA_SEGMENT_NAME1': u'SEGMENT2', 'COA_SEGMENT_NAME2': None, 'COA_SEGMENT_NAME3': None, 'DIMENSION_FILTER_FLAG': None, 'COA_SEGMENT_NAME4': None, 'DIMNAME': u'UD1', 'DIMENSION_ID': 7L}}, 'TARGET_APPLICATION_TYPE': u'HFM', 'PARTITIONKEY': 1L, 'PARTVALGROUP': u'[NONE]', 'LEDGER_GROUP_CODE': u'SINGLE', 'INCLUDE_ZERO_BALANCE_FLAG': u'N', 'EXECUTION_MODE': u'SNAPSHOT', 'PLAN_NUMBER': 0L, 'PS_LEDGER': None, 'BALANCE_SELECTION': u'FUNCTIONAL', 'BALANCE_AMOUNT_IS': u'PERIODIC', 'RULE_ID': 1L, 'BALANCE_AMOUNT_BS': u'YTD', 'CURRENCY_CODE': None, 'SOURCE_ADAPTER_FLAG': u'N', 'BALANCE_METHOD_CODE': u'STANDARD', 'SIGNAGE_METHOD': u'SAME', 'WEB_SERVICE_URL': u'http://localhost:9000/aif', 'DATA_LOAD_METHOD': u'CLASSIC_VIA_EPMI'}
    2013-11-08 11:49:14,407 DEBUG [AIF]: CommData.getRuleInfo - END
    2013-11-08 11:49:14,407 DEBUG [AIF]: CommData.getPovList - START
    2013-11-08 11:49:14,407 DEBUG [AIF]:
            SELECT PARTITIONKEY
            ,PARTNAME
            ,CATKEY
            ,CATNAME
            ,PERIODKEY
            ,COALESCE(PERIODDESC, TO_CHAR(PERIODKEY,'YYYY-MM-DD HH24:MI:SS')) PERIODDESC
            ,RULE_ID
            ,RULE_NAME
            FROM (
              SELECT DISTINCT brl.PARTITIONKEY
              ,part.PARTNAME
              ,brl.CATKEY
              ,cat.CATNAME
              ,pprd.PERIODKEY
              ,pp.PERIODDESC
              ,brl.RULE_ID
              ,br.RULE_NAME
              FROM AIF_BAL_RULE_LOADS brl
              INNER JOIN AIF_BALANCE_RULES br
                ON br.RULE_ID = brl.RULE_ID
              INNER JOIN TPOVPARTITION part
                ON part.PARTITIONKEY = brl.PARTITIONKEY
              INNER JOIN TPOVCATEGORY cat
                ON cat.CATKEY = brl.CATKEY
              INNER JOIN AIF_PROCESS_PERIODS pprd
                ON pprd.PROCESS_ID = brl.LOADID
              LEFT OUTER JOIN TPOVPERIOD pp
                ON pp.PERIODKEY = pprd.PERIODKEY
              WHERE brl.LOADID = 22
            ) q
            ORDER BY PARTITIONKEY
            ,CATKEY
            ,PERIODKEY
            ,RULE_ID
    2013-11-08 11:49:14,409 DEBUG [AIF]: CommData.getPovList - END
    2013-11-08 11:49:14,409 DEBUG [AIF]: CommData.updateWorkflow - START
    2013-11-08 11:49:14,409 DEBUG [AIF]:
        SELECT tlp.PROCESSSTATUS
        ,tlps.PROCESSSTATUSDESC
        ,CASE WHEN (tlp.INTLOCKSTATE = 60) THEN 'Y' ELSE 'N' END LOCK_FLAG
        FROM TLOGPROCESS tlp
        ,TLOGPROCESSSTATES tlps
        WHERE tlp.PARTITIONKEY = 1
        AND tlp.CATKEY = 1
        AND tlp.PERIODKEY = '2012-01-01'
        AND tlp.RULE_ID = 1
        AND tlps.PROCESSSTATUSKEY = tlp.PROCESSSTATUS
    2013-11-08 11:49:14,410 DEBUG [AIF]:
            UPDATE TLOGPROCESS
            SET PROCESSENDTIME = CURRENT_TIMESTAMP
            ,PROCESSSTATUS = 30
              ,PROCESSENTLOAD = 0
              ,PROCESSENTVAL = 0
              ,PROCESSENTLOADNOTE = NULL
              ,PROCESSENTVALNOTE = NULL
            WHERE PARTITIONKEY = 1
            AND CATKEY = 1
            AND PERIODKEY = '2012-01-01'
            AND RULE_ID = 1
    2013-11-08 11:49:14,411 DEBUG [AIF]: CommData.updateWorkflow - END
    2013-11-08 11:49:14,412 DEBUG [AIF]:
        SELECT COALESCE(usr.PROFILE_OPTION_VALUE, app.PROFILE_OPTION_VALUE, site.PROFILE_OPTION_VALUE) PROFILE_OPTION_VALUE
        FROM AIF_PROFILE_OPTIONS po
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES site
          ON site.PROFILE_OPTION_NAME = po.PROFILE_OPTION_NAME
          AND site.LEVEL_ID = 1000
          AND site.LEVEL_VALUE = 0
          AND site.LEVEL_ID <= po.MAX_LEVEL_ID
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES app
          ON app.PROFILE_OPTION_NAME = site.PROFILE_OPTION_NAME
          AND app.LEVEL_ID = 1005
          AND app.LEVEL_VALUE = NULL
          AND app.LEVEL_ID <= po.MAX_LEVEL_ID
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES usr
          ON usr.PROFILE_OPTION_NAME = usr.PROFILE_OPTION_NAME
          AND usr.LEVEL_ID = 1010
          AND usr.LEVEL_VALUE = NULL
          AND usr.LEVEL_ID <= po.MAX_LEVEL_ID
        WHERE po.PROFILE_OPTION_NAME = 'JAVA_HOME'
    2013-11-08 11:49:14,413 DEBUG [AIF]: HFM Load command:
    %EPM_ORACLE_HOME%/products/FinancialDataQuality/bin/HFM_LOAD.vbs "22" "a9E3uvSJNhkFhEQTmuUFFUElfdQgKJKHrb1EsjRsL6yZJlXsOFcVPbGWHhpOQzl9zvHoo3s%2Bdq6R4yJhp0GMNWIKMTcizp3%2F8HASkA7rVufXDWEpAKAK%2BvcFmj4zLTI3rtrKHlVEYrOLMY453J2lXk6Cy771mNSD8X114CqaWSdUKGbKTRGNpgE3BfRGlEd1wZ3cra4ee0jUbT2aTaiqSN26oVe6dyxM3zolc%2BOPkjiDNk1MqwNr43tT3JsZz4qEQGF9d39DRN3CDjUuZRPt4SEKSSL35upncRJiw2uBOtV%2FvSuGLNpZ2J79v1%2Ba1Oo9c4Xhig7SFYbE6Jwk1yXRJLTSw0DKFu%2FEpcdjpOnx%2F6YawMBNIa5iu5L637S91jT1Xd3EGmxZFq%2Bi6bHdCJAC8g%3D%3D" "C%3A%5COracle%5CMiddleware%5Cuser_projects%5Cepmsystem1" "%25EPM_ORACLE_HOME%25%2F..%2Fjdk160_35"
    Is there anywhere we need to set EPM Home? Also, could you let me know what PSU1 is?
    Thanks in advance.
    Praneeth
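    The JAVA_HOME lookup at the end of the log can be checked directly against the repository tables it references. A minimal sketch, assuming read access to the FDMEE/AIF repository schema; the table and column names are taken from the logged query above, nothing else is:

        -- Inspect the profile option values that FDMEE resolves (here JAVA_HOME);
        -- table and column names come from the query shown in the log.
        SELECT po.PROFILE_OPTION_NAME,
               pov.LEVEL_ID,
               pov.LEVEL_VALUE,
               pov.PROFILE_OPTION_VALUE
        FROM   AIF_PROFILE_OPTIONS po
        LEFT OUTER JOIN AIF_PROFILE_OPTION_VALUES pov
               ON pov.PROFILE_OPTION_NAME = po.PROFILE_OPTION_NAME
        WHERE  po.PROFILE_OPTION_NAME = 'JAVA_HOME'
        ORDER  BY pov.LEVEL_ID;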

  • Reading Xml file from clob column in the staging table

    Hi,
    I am trying to poll a staging table with the database adapter; the table has a CLOB column that contains an XML file. How do I extract the XML from the CLOB and map its fields to another variable with a definite schema?
    Thanks,
    Edited by: chaitu123 on Sep 20, 2009 8:16 AM

    1) When you create the DBAdapter on a table that has a CLOB column, check the generated XSD closely: the CLOB column element should be of String data type.
    2) Create an XSD for the XML file and create a variable based on that XSD element.
    3) Use ora:parseEscapedXML("yourDBAdapterClobElement") to assign the content to the XML file variable.
    Krishna
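    If it helps to verify the staged payload before the adapter and ora:parseEscapedXML get involved, the CLOB can also be parsed as XML directly in the database. A minimal sketch, assuming a hypothetical staging table STG_MSG with a CLOB column XML_PAYLOAD (all names invented for illustration):

        -- Hypothetical names: STG_MSG, XML_PAYLOAD, MSG_ID, /Order/OrderId.
        -- XMLTYPE() raises an error if the CLOB is not well-formed XML, which makes
        -- it a quick validity check; EXTRACTVALUE pulls one field out of the document.
        SELECT msg_id,
               EXTRACTVALUE(XMLTYPE(xml_payload), '/Order/OrderId') AS order_id
        FROM   stg_msg
        WHERE  msg_id = 101;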

  • Track table changes at Schema Level

    Hi,
    I have 40 tables in a schema and 40 corresponding history tables in the same schema. If any row is deleted (or updated) in a parent table, I want to insert the old data into the matching history table.
    Do I need to write 40 separate triggers for this, or is there any option to write a single trigger at the schema level so that the changes on the parent tables are inserted into the history tables?
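    DML triggers in Oracle are defined per table (schema-level triggers only cover DDL and database events), so the usual route is one small trigger per audited table, possibly generated from the data dictionary. A minimal sketch for a single table, assuming hypothetical EMP and EMP_HIST tables with matching columns:

        -- Hypothetical tables EMP / EMP_HIST used for illustration; repeat (or
        -- generate from USER_TABLES / USER_TAB_COLUMNS) one trigger per audited table.
        CREATE OR REPLACE TRIGGER trg_emp_hist
          BEFORE UPDATE OR DELETE ON emp
          FOR EACH ROW
        BEGIN
          -- Copy the pre-change row image into the history table,
          -- tagging it with the operation type and a timestamp.
          INSERT INTO emp_hist (emp_id, emp_name, salary, change_type, change_date)
          VALUES (:OLD.emp_id, :OLD.emp_name, :OLD.salary,
                  CASE WHEN DELETING THEN 'DELETE' ELSE 'UPDATE' END,
                  SYSTIMESTAMP);
        END;
        /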

    Hi,
    I was trying to execute the following Workspace Manager call:
    BEGIN
      DBMS_WM.GrantSystemPriv (
        'ACCESS_ANY_WORKSPACE, MERGE_ANY_WORKSPACE, '
        || 'CREATE_ANY_WORKSPACE, REMOVE_ANY_WORKSPACE, '
        || 'ROLLBACK_ANY_WORKSPACE',
        'WSMGMT',
        'YES');
    END;
    /
    (which was mentioned in the link http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4529781014729)
    I am getting below error.
    ORA-06550: line 2, column 4:
    PLS-00201: identifier 'DBMS_WM.GRANTSYSTEMPRIV' must be declared
    ORA-06550: line 2, column 4:
    PL/SQL: Statement ignored
    How do I declare that, or how can I clear this error?
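    For context, PLS-00201 on DBMS_WM typically means Workspace Manager is not installed in the database or the package is not visible to the calling user. A minimal sketch of the checks a DBA could run (generic data-dictionary queries, not taken from this thread):

        -- Does the DBMS_WM synonym/package exist at all?
        SELECT owner, object_name, object_type, status
        FROM   dba_objects
        WHERE  object_name = 'DBMS_WM';

        -- Is the Workspace Manager component registered and valid?
        SELECT comp_name, version, status
        FROM   dba_registry
        WHERE  comp_name LIKE '%Workspace Manager%';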

  • DB Tool List Table: How to access tables which are in different SQL database ?

    Hi, All,
    I'm working on a database application (SQL Server) and am evaluating the NI DB Toolkit for this project.
    One of the requirements is that I need to access tables that live in two different databases
    (say Table A in DB1 and Table B in DB2).
    Our IT guys have linked Table A in DB1 to DB2, and I verified this in SQL Server Management Studio:
    when the DB2 table list is populated, Table A from DB1 is also there. I can do the same thing with MS Access;
    Table A in DB1 is available to me even though I only connect to DB2.
    Here comes the problem.
    When I use DB Tool List Table.vi against DB2, it does NOT list Table A from DB1. It only lists the tables in
    the database (DB2) that I make the connection to (using DB Tool Open Connection.vi with a file DSN).
    So my workaround right now is to open two separate connections, one to DB1 and one to DB2. However, this approach
    obviously becomes a problem when I have to access both databases constantly in my application.
    What would be a solution to this? I've searched the forum but only found one post that is somewhat related to
    my question (and it was posted in 2004). Perhaps I need to alter the code in the original VI (DB Tool List Table.vi)?
    My IT guy told me he has not encountered this scenario, since he writes code in other environments such as
    VB, where he has always been successful linking tables to different databases.
    I hope my question is sound and clear, since I really don't know much about database terminology.
    Any comment/suggestion is much appreciated!!!
    Thanks
    Chad

    To josborne:
    To answer your questions:
    - Are the two databases contained on the same SQL Server instance?
    Or are the databases on separate instances? I assume they are on
    separate servers; otherwise this wouldn't really be an issue. But it's
    good to know because it will affect how you build your SQL statements.
    Yes, they are on separate instances.
    - Ask your IT people specifically how they "linked Table A in DB1 to
    DB2". I assume they used "linked servers".
    Maybe I used the wrong terminology, "linked": they created a "View of Table A (DB1)" in DB2 using Management Studio.
    Here is a screenshot of that. As you can see, dbo.VISUAL_WORK_ORDER shows up under the LABVIEW database in Management Studio.
    I also see the same table when I connect to the database from MS Access.
    Could you elaborate on "configure your SQL statement correctly" =) ? The purpose of using LabVIEW's toolkit is so that I can write a
    minimum of SQL statements. Are you talking about modifying LabVIEW's native VI (DB Tool List Table.vi)?
    Thanks for the information. SQL is just something new to me.
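    One way to read this setup, sketched below with assumed names: once the view over the DB1 table exists inside the LABVIEW (DB2) database, a single connection to LABVIEW can query both data sets with fully qualified names, even if DB Tool List Table.vi never lists the view. Only LABVIEW and dbo.VISUAL_WORK_ORDER come from the screenshot; the local table and join column are invented for illustration:

        -- T-SQL sketch over one connection to the LABVIEW database.
        -- LABVIEW.dbo.VISUAL_WORK_ORDER is the view IT created over the DB1 table;
        -- SomeLocalTable and WORK_ORDER_ID are hypothetical placeholders.
        SELECT wo.*
        FROM   LABVIEW.dbo.VISUAL_WORK_ORDER AS wo
        JOIN   LABVIEW.dbo.SomeLocalTable    AS lt
               ON lt.WORK_ORDER_ID = wo.WORK_ORDER_ID;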
