Loading Data to Multiple Targets in BI

Hi Experts,
I have a doubt regarding data loads to multiple targets in BI. Let me explain the scenario clearly: data coming from one source system has to be loaded to multiple data targets in BI. In BW 3.x, we would simply create multiple update rules from the InfoSource to the different targets. In the InfoPackage maintenance screen, under the Data Targets tab, we would select the targets we want to load, and running the InfoPackage would update the data to all the targets selected in the InfoPackage.
But in BI, how do we implement this scenario? Here we need to create individual DTPs, and there is no option to load the data simultaneously to multiple targets.
So, is there any solution to implement this scenario in BI? Please explain.
Thanks in advance.
Regards
Ramakrishna Kamurthy

Hi Dennis,
No worries at all. I've been trying different approaches, and strangely it does seem to load data packages faster when going via an InfoSource (I don't understand why). However, it doesn't want to do it in parallel.
Whereas when I loaded directly from the DataSource to the DataTarget, it processed each data package twice as slowly, but three at a time. The net result was that the load without the InfoSource was faster. This can be seen in the DTP process monitor and in SM50.
Both DTPs had the default setting in Settings for Batch Manager of 3 parallel processes.
Our batch queues in SM50 have not been blocked with other processes.
Has anyone else had problems with parallel processes when loading via an InfoSource?
Thanks
Adrian
P.S.
I think I've discovered two cases where InfoSource may bring performance improvements:
Filtering Records
Transformation A includes the common and more simplistic transformations, e.g. setting a "Relevant" flag when certain conditions are met.
Transformation B includes the complex transformations. At the beginning of B, you include a start routine that filters out records not marked "Relevant". That way it only does the complex work on relevant records.
Time Conversion
If your DataSource has Fiscal Period but you want your DataTarget to have Calendar Month, you need to write a routine to convert it if you extract directly from DataSource to DataTarget.
Whereas, if the Fiscal Period is passed to an InfoSource, you can use time conversions or formulas to convert Fiscal Period to Calendar Month in the transformation between the InfoSource and the DataTarget.

Similar Messages

  • Loading data from multiple files to multiple tables

    How should I approach creating an SSIS package to load data from multiple files into multiple tables? Also, the files will have data that might overlap, so I might have to create a stored procedure for it. For example, the first day's file has data from Aug 1 to Aug 10,
    and the second day's file might have data from Aug 5 to Aug 15. So I might have to look for the max and min dates and truncate the table within that date range.

    That's OK. A Foreach Loop would be able to iterate through the files. You can declare a variable inside the loop to capture the filenames; choose 'fully qualified' as the retrieval option in the loop.
    Then inside the loop:
    1. Add an Execute SQL Task to delete the overlapping data from the table (see the sketch after this list). One question here: where will you get the date from? Does it come inside the filename?
    2. Add a Data Flow Task with a file source pointing to the file. For this, add a suitable connection manager (Excel/flat file etc.) and map its connection string property to the filename variable using expressions.
    3. Add an OLE DB destination pointing to the table. Use the 'table name or view name variable - fast load' option and map it to a variable to make the table name dynamic; then just set the corresponding value for the variable to get the correct table name.
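    For step 1, the Execute SQL Task could run something along these lines (a minimal sketch; the staging table and date column are hypothetical, and the two ? placeholders would be mapped, via the task's Parameter Mapping, to SSIS variables holding the file's min and max dates):
        -- Delete the date range that the incoming file overlaps
        DELETE FROM dbo.SalesStage
        WHERE  LoadDate BETWEEN ? AND ?;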

  • Loading data into multiple tables - Bulk collect or regular Fetch

    I have a procedure to load data from one source table into eight different destination tables. The 8 tables have some of the columns of the source table with a common key.
    I have run into a couple of problems and have a few questions where I would like to seek advice:
    1.) The procedure took the same time for 100,000 records with and without the BULK COLLECT clause. I thought I would see a performance improvement when I included BULK COLLECT with LIMIT.
    2.) Updating the load_flag in source_table happens only for a few records, not all. I had expected all records to be updated.
    3.) Are there other suggestions to improve the performance? or could you provide links to other posts or articles on the web that will help me improve the code?
    Notes:
    1.) 8 Destination tables have at least 2 Million records each, have multiple indexes and are accessed by application in Production
    2.) There is an initial load of 1 Million rows with a subsequent daily load of 10,000 rows. Daily load will have updates for existing rows (not shown in code structure below)
    The structure of the procedure is as follows:
    DECLARE
        TYPE dest_type IS TABLE OF source_table%ROWTYPE;
        dest_tab  dest_type;
        iCount    NUMBER := 0;
        CURSOR source_cur IS SELECT * FROM source_table FOR UPDATE OF load_flag;
    BEGIN
        OPEN source_cur;
        LOOP
            FETCH source_cur        -- BULK COLLECT
                INTO dest_tab;      -- LIMIT 1000
            EXIT WHEN source_cur%NOTFOUND;
            FOR i IN dest_tab.FIRST .. dest_tab.LAST LOOP
                <Insert into app_tab1 values key, col12, col23, col34 ;>
                <Insert into app_tab2 values key, col15, col29, col31 ;>
                <Insert into app_tab3 values key, col52, col93, col56 ;>
                UPDATE source_table SET load_flag = 'Y' WHERE CURRENT OF source_cur;
                iCount := iCount + 1;
                IF iCount = 1000 THEN
                    COMMIT;
                    iCount := 0;
                END IF;
            END LOOP;
        END LOOP;
        COMMIT;
    END;

    Assuming you are on 10g or later, the PL/SQL compiler generates the bulk fetch for you automatically, so your code is the same as (untested):
    DECLARE
        iCount NUMBER := 0;
        CURSOR source_cur IS SELECT * FROM source_table FOR UPDATE OF load_flag;
    BEGIN
        FOR r IN source_cur
        LOOP
            <Insert into app_tab1 values key, col12, col23, col34 ;>
            <Insert into app_tab2 values key, col15, col29, col31 ;>
            <Insert into app_tab3 values key, col52, col93, col56 ;>
            UPDATE source_table SET load_flag = 'Y' WHERE CURRENT OF source_cur;
            iCount := iCount + 1;
            IF iCount = 1000 THEN
                COMMIT;
                iCount := 0;
            END IF;
        END LOOP;
        COMMIT;
    END;
    However, most of the benefit of bulk fetching comes from using the array with a FORALL statement, which the PL/SQL compiler can't automate for you.
    If you are fetching 1000 rows at a time, purely from a code simplification point of view you could lose iCount and the IF...COMMIT...END IF and just commit each time after looping through the 1000-row array.
    However I'm not sure how committing every 1000 rows helps restartability, even if your real code has a WHERE clause in the cursor so that it only selects rows with load_flag = 'N' or whatever. If you are worried that it will roll back all your hard work on failure, why not just commit in your exception handler?
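    To make the FORALL point concrete, here is a minimal, untested sketch for one of the destination tables. The key/column names are the same placeholders as above; note that WHERE CURRENT OF cannot be combined with FORALL, so the flag update re-identifies rows by key instead of using a FOR UPDATE cursor:
    DECLARE
        TYPE key_tab_t IS TABLE OF source_table.key%TYPE;
        TYPE col_tab_t IS TABLE OF source_table.col12%TYPE;  -- assumes col23/col34 share this type
        keys key_tab_t;
        c12  col_tab_t;
        c23  col_tab_t;
        c34  col_tab_t;
        CURSOR source_cur IS
            SELECT key, col12, col23, col34
            FROM   source_table
            WHERE  load_flag = 'N';
    BEGIN
        OPEN source_cur;
        LOOP
            FETCH source_cur BULK COLLECT INTO keys, c12, c23, c34 LIMIT 1000;
            EXIT WHEN keys.COUNT = 0;
            -- One SQL call per 1000-row batch instead of 1000 single-row inserts
            FORALL i IN 1 .. keys.COUNT
                INSERT INTO app_tab1 (key, col12, col23, col34)
                VALUES (keys(i), c12(i), c23(i), c34(i));
            FORALL i IN 1 .. keys.COUNT
                UPDATE source_table SET load_flag = 'Y' WHERE key = keys(i);
            COMMIT;  -- one commit per batch, replacing the iCount logic
        END LOOP;
        CLOSE source_cur;
    END;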

  • Loading data from multiple tables to multiple sheets of excel using SSIS

    I have a requirement in which I want to load data from 13 tables into the 13 respective sheets of a single Excel file using SSIS.
    Does anyone know the SSIS logic for developing a package for this?

    see similar example here
    http://visakhm.blogspot.in/2013/09/exporting-sqlserver-data-to-multiple.html
    In your case you need to use a loop to iterate through the tables.
    First, get the list of tables into an object variable created in SSIS, using the INFORMATION_SCHEMA.TABLES view.
    Then add a Foreach Loop based on the ADO enumerator over that variable to iterate through the tables, and inside the loop follow the method in the link above to create each sheet first and then populate it.
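    For the first step, an Execute SQL Task with its ResultSet option set to "Full result set" could fill the object variable with a query along these lines (restricting to base tables in the dbo schema is just an assumption; adjust the filter to your case):
        SELECT TABLE_SCHEMA, TABLE_NAME
        FROM   INFORMATION_SCHEMA.TABLES
        WHERE  TABLE_TYPE = 'BASE TABLE'
          AND  TABLE_SCHEMA = 'dbo';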

  • Loading data into multiple tables from an excel

    Can we load data into multiple tables at a time from an Excel file through Utilities? If yes, how? Please help me.
    Regards,
    Pallavi

    I would imagine that the utilities allow you to insert data from a spreadsheet into one and only one table.
    You may have to write your own custom data upload using external tables and a PL/SQL procedure to insert the data from one spreadsheet into more than one table.
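    For example, a minimal, untested sketch of that approach (all object names are hypothetical, and it assumes the spreadsheet has been saved as a CSV file in a directory covered by an Oracle directory object):
        -- External table over the exported spreadsheet
        CREATE TABLE people_ext (
            emp_id    NUMBER,
            emp_name  VARCHAR2(50),
            dept_id   NUMBER,
            dept_name VARCHAR2(50)
        )
        ORGANIZATION EXTERNAL (
            TYPE ORACLE_LOADER
            DEFAULT DIRECTORY data_dir
            ACCESS PARAMETERS (
                RECORDS DELIMITED BY NEWLINE
                FIELDS TERMINATED BY ','
            )
            LOCATION ('people.csv')
        );
        -- One pass over the file, two target tables
        BEGIN
            INSERT INTO emp (emp_id, emp_name, dept_id)
                SELECT emp_id, emp_name, dept_id FROM people_ext;
            INSERT INTO dept (dept_id, dept_name)
                SELECT DISTINCT dept_id, dept_name FROM people_ext;
            COMMIT;
        END;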
    If you need any guidance on doing this let me know and I will happily point you in the right direction.
    Regards
    Duncan

  • Loading data in multiple languages using Import Manager

    Hello Experts,
    I have a description field associated with the main table and need to load its data in multiple languages. How do I load data in multiple languages using Import Manager?
    Thanks in advance..

    Hi Kiran,
    Please do the changes as mentioned in this threads:
    Re: Multi Language Question
    Multilingual field
    Regards,
    ---Satish

  • Business content load data from multiple clients

    Hi Experts,
    we are implementing Business Content for FI-GL. On the system, the content is already enabled and extracting from one client (let's say client 200). We now have to extract data from client 300. How can we do that?
    Thanks in advance,
    Costi

    You should create another source system for the other client and copy the transfer rules from the extractor, mapping them to the new source system. After this you can create the InfoPackage and load the data.
    Hope it helps.

  • OWB 10R2 XML_ETL_UTILS xml data into multiple targets

    I am having some trouble with loading an XML file correctly into the target tables.
    The XML looks like this (simplified):
    <DATA>
      <ROW rowid="1">
        <FIELD1>bla1.1</FIELD1>
        <FIELD2>bla1.2</FIELD2>
        <CHILDREN>
          <CHILD id="c1_1">bla_c1.1</CHILD>
          <CHILD id="c1_2">bla_c1.2</CHILD>
        </CHILDREN>
        <PETS>
          <PET id="p1_1">bla_p1.1</PET>
        </PETS>
      </ROW>
      <ROW rowid="2">
        <FIELD1>bla2.1</FIELD1>
        <FIELD2>bla2.2</FIELD2>
        <CHILDREN>
          <CHILD id="c2_1">bla_c2.1</CHILD>
        </CHILDREN>
        <PETS>
          <PET id="p2_1">bla_p2.1</PET>
          <PET id="p2_2">bla_p2.2</PET>
        </PETS>
      </ROW>
    </DATA>
    I have a very simple data model:
    TABLE data (DATAID, FIELD1, FIELD2)
    TABLE datadetails (DETAILID, DATAID, DETAILTYPE, DETAILKEY, DETAILVALUE)
    (Both set to truncate/insert)
    When generating the pluggable mappings using the XML_ETL_UTILS each 'level' is generated into a pluggable map (DATA, ROW, CHILDREN, CHILD, PETS, PET).
    1) The first problem I had was how to keep the relationship between the two tables: when inserting data into the 'datadetails' table, each row has to be linked to the correct row of the 'data' table.
    This has been solved (not in the best way, I think) by manually modifying the pluggable mappings CHILDREN, CHILD, PETS and PET to include and pass through the 'rowid' attribute, so this 'rowid' can be inserted into the 'datadetails' table when inserting the details.
    -> My guess is that there should be a better way, or that XML_ETL_UTILS should be able to generate the code to handle this parent<->child relation preservation. Only I don't know how (yet).
    2) The problem I am facing now (after manually modifying the generated pluggable mappings to pass through the ID of the parent) is that for each parent ALL details of the XML file are inserted, not just the details belonging to that particular parent.
    It seems that when processing the data for the 'datadetails' table, it re-reads the whole XML file and looks for all instances of the detail element of the particular pluggable mapping.
    This results in a 'datadetail' table looking like this
    DETAILID  DATAID  DETAILTYPE  DETAILKEY  DETAILVALUE
    *1        1       CHILDREN    c1_1       bla_c1.1
    *2        1       CHILDREN    c1_2       bla_c1.2
    *3        1       PETS        p1_1       bla_p1.1
    4         1       CHILDREN    c2_1       bla_c2.1
    5         1       PETS        p2_1       bla_p2.1
    6         1       PETS        p2_2       bla_p2.2
    7         2       CHILDREN    c1_1       bla_c1.1
    8         2       CHILDREN    c1_2       bla_c1.2
    9         2       PETS        p1_1       bla_p1.1
    *10       2       CHILDREN    c2_1       bla_c2.1
    *11       2       PETS        p2_1       bla_p2.1
    *12       2       PETS        p2_2       bla_p2.2
    This is of course not correct; only the rows marked with a * should be there.
    When I try to add a filter within the pluggable mappings so that they only pick up the elements that have the correct parent (fortunately the parent ID is available thanks to the manual modifications mentioned above), the generated code is not correct: it contains incorrect (missing) references to the variables and doesn't work.
    -> Any idea of how to fix this?
    => More generally: what is the correct approach with OWB to load XML files that contain multiple rows, where rows can have multi-level parent<->child relations? (Very common in the XML world.) Somewhere after processing we need a relational model to hold this data. Any help, examples, etc. is very much appreciated.
    Don't hesitate to call/e-mail me for questions/remarks, etc.
    Regards,
    Patrick

    Hi Tullio
    Yes, you will have to regenerate the pluggable mappings (rerun the expert), and you can either rebuild your map or synchronize the pluggable mapping operator in the map.
    The error crept in when I introduced the 'create root node' option in the expert; that path worked, just the original did not, sorry :) The fix should resolve your problem and I will get a full update uploaded.
    Thanks
    David
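    As a side note to this thread: on recent database releases the same parent/child shredding can also be expressed directly in SQL with nested XMLTABLE calls, which keep each detail tied to its parent row. A sketch against the sample document above, with :xml_doc standing for the XML as a CLOB:
        -- Parent rows for the "data" table
        SELECT x.data_id, x.field1, x.field2
        FROM   XMLTABLE('/DATA/ROW' PASSING XMLTYPE(:xml_doc)
                 COLUMNS data_id VARCHAR2(10) PATH '@rowid',
                         field1  VARCHAR2(50) PATH 'FIELD1',
                         field2  VARCHAR2(50) PATH 'FIELD2') x;
        -- Child rows for "datadetails": the outer XMLTABLE hands each ROW's
        -- CHILDREN fragment to the inner one, so details cannot leak across parents
        SELECT r.data_id,
               'CHILDREN' AS detailtype,
               c.detail_key,
               c.detail_value
        FROM   XMLTABLE('/DATA/ROW' PASSING XMLTYPE(:xml_doc)
                 COLUMNS data_id  VARCHAR2(10) PATH '@rowid',
                         children XMLTYPE      PATH 'CHILDREN') r,
               XMLTABLE('CHILD' PASSING r.children
                 COLUMNS detail_key   VARCHAR2(10) PATH '@id',
                         detail_value VARCHAR2(50) PATH 'text()') c;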

  • How to load data dynamically into target tables using files as a source

    Hi ,
    My scenario needs to load the data of five different files into five target tables through a single interface. All target tables have the same structure. It is possible to point to variable source files in ODI, but the same approach is not working with database tables: I am getting errors while trying to make my target/source table dynamic.
    Can anybody suggest anything? The last option would be writing a dynamic PL/SQL block in the KM. Any other suggestions, friends?
    Regards,
    Atish

    After creating a pair of identical source and target tables, I carried out the following steps (trying to keep just the target as a variable):
    a) created a one-to-one interface,
    b) tested that it runs,
    c) created a variable (type = text),
    d) used the variable as #v_name in the resource name of the target table datastore,
    e) in a package, used the variable in a Set Variable step (first step),
    f) used the interface as the second step,
    g) executed the package in my context.
    The <project_code>.variable_name is not getting substituted in the SQL code generated by the KM. My KM is SQL Control Append, and the following is the code it generates in the "Insert into I$" step:
    /* DETECTION_STRATEGY = NOT_EXISTS */
    insert /*+ APPEND */ into HR.I$_JOBS_COPY1
    (
        JOB_ID,
        JOB_TITLE,
        MIN_SALARY,
        MAX_SALARY,
        IND_UPDATE
    )
    select
        JOBS.JOB_ID      JOB_ID,
        JOBS.JOB_TITLE   JOB_TITLE,
        JOBS.MIN_SALARY  MIN_SALARY,
        JOBS.MAX_SALARY  MAX_SALARY,
        'I' IND_UPDATE
    from HR.JOBS JOBS
    where (1=1)
    and not exists (
        select 'X'
        from   HR.#PLAYGROUND."v_tab_name" T
        where  T.JOB_ID = JOBS.JOB_ID
        and    ((JOBS.JOB_TITLE = T.JOB_TITLE) or (JOBS.JOB_TITLE IS NULL and T.JOB_TITLE IS NULL))
        and    ((JOBS.MIN_SALARY = T.MIN_SALARY) or (JOBS.MIN_SALARY IS NULL and T.MIN_SALARY IS NULL))
        and    ((JOBS.MAX_SALARY = T.MAX_SALARY) or (JOBS.MAX_SALARY IS NULL and T.MAX_SALARY IS NULL))
    )

  • Loading Data from multiple tables into essbase using ODI

    Hi,
    We have a scenario where the data comes from multiple tables. I would like to know how ODI will load this data for the right combination of members.

    Hi,
    I take it each data table has a field which maps to the other table. You can just drag the datastores onto the source side of the interface and create a join between the tables.
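    In SQL terms, the join ODI generates from such a source diagram amounts to something like this (table and column names purely illustrative):
        SELECT a.entity, a.account, p.period_name, a.amount
        FROM   actuals a
        JOIN   periods p ON p.period_key = a.period_key;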
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Loading data into multiple tables using sqlloader

    Hi,
    I am using SQL*Loader to load data from a flat file into the database.
    My file structure is as below:
    ====================
    101,john,mobile@@fax@@home@@office@@email,1234@@3425@@1232@@2345@@[email protected],1234.40
    102,smith,mobile@@fax@@home,1234@@345@@234,123.40
    103,adams,fax@@mobile@@office@@others,1234@@1233@@1234@@3456,2345.40
    In the file, the first columns are empno, ename, comm_mode (multiple values terminated by '@@'), comm_no_txt (multiple values terminated by '@@'), and sal.
    The comm_mode and comm_no_txt values need to be inserted into a separate table (emp_comm) like below:
    emp
    empno  ename  sal
    101    john   1234.40
    102    smith  123.40
    103    adams  2345.40
    emp_comm
    empno  comm_mode  comm_no_text
    101    mobile     1234
    101    fax        3425
    101    home       1232
    101    office     2345
    101    email      [email protected]
    102    mobile     1234
    102    fax        345
    102    home       234
    103    fax        1234
    The data needs to be inserted like this using SQL*Loader.
    My table structures:
    ===============
    emp
    empno number(5)
    ename varchar2(15)
    sal number(10,2)
    emp_comm
    empno number(5) references emp(empno)
    comm_mode varchar2(10)
    comm_no_text varchar2(35)
    Now I want to insert the file data into the structures above.
    Please help me achieve this using SQL*Loader
    (we are not using external tables for this).
    Thanks & Regards.
    Bala Sake

    Please post OS and database details.
    You will need to split up the data file in order to load into separate tables. The process is documented here:
    http://docs.oracle.com/cd/E11882_01/server.112/e22490/ldr_control_file.htm#autoId72
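    For the fixed part of each record (the emp table), a minimal control file might look like the untested sketch below; the @@-delimited comm_mode/comm_no_txt lists would still have to be split out into child rows (for example by the approach in the linked documentation) before emp_comm can be loaded:
        LOAD DATA
        INFILE 'emp_data.txt'
        INTO TABLE emp
        FIELDS TERMINATED BY ','
        TRAILING NULLCOLS
        (
            empno,
            ename,
            comm_mode   FILLER CHAR(200),  -- @@-delimited list, skipped for emp
            comm_no_txt FILLER CHAR(200),  -- @@-delimited list, skipped for emp
            sal
        )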
    HTH
    Srini

  • SQLLOADER PROBLEM IN LOADING DATA TO MULTIPLE TABLES

    My problem is that I have to load data from a flat file, which consists of 64 columns and 11,040 records, into 5 different tables. The other thing is that I have to check that only UNIQUE records go to the database, and then I have to generate a primary key for each record that arrives.
    So I have written a BEFORE INSERT ... FOR EACH ROW trigger for all 5 tables to check the uniqueness of the arriving record.
    Now my problem is that SQL*Loader is loading, for all the tables, only the number of records that the smallest table loads uniquely, i.e.:
    TABLES     RECORDS (ORIGINALLY)
    TIME       11
    STORES     184
    PROMOTION  20
    PRODUCT    60
    Now it is loading only 11 records for all the tables.
    with regards
    vijayankar

    The easiest thing is to do data manipulation in the database; that's what SQL is good for.
    So load your file into tables without any unique constraints. Then apply the unique constraints using the EXCEPTIONS INTO ... clause. This will populate your exceptions table with the rowids of all the non-unique rows. You can then decide which rows to zap.
    If you don't already have an exceptions table you'll need to run utlexcpt.sql.
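    For example (the STORES table and its key column are illustrative):
        -- Exceptions table created beforehand by running utlexcpt.sql
        ALTER TABLE stores
            ADD CONSTRAINT stores_uk UNIQUE (store_id)
            EXCEPTIONS INTO exceptions;
        -- If duplicates exist the ALTER fails, but EXCEPTIONS now holds
        -- the rowid of every offending row:
        SELECT s.*
        FROM   stores s
        WHERE  s.rowid IN (SELECT row_id FROM exceptions);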
    HTH
    P.S. This isn't the right forum to be posting SQL*Loader enquiries.

  • How to load multiple target tables simultaneously in single interface?

    I have a requirement where I have to load data into two target tables simultaneously from a single interface. The reason is to populate the same parent-child relationship on the target side as on the source side.
    For example: I have 2 headers and 10 corresponding lines in the source. Now I want to load the 2 headers into T1 and the 10 lines into T2 simultaneously.
    E.g. SOURCE_TABLE
    HeaderId  HeaderDesc  LineId  LineDesc
    1         AAA         10      QQQ
    1         AAA         20      WWW
    2         BBB         30      ZZZ
    2         BBB         10      XXX
    TARGET_TABLES:
    TARGET_HEADER
    HeaderId  HeaderDesc
    1         AAA
    2         BBB
    TARGET_LINE
    HeaderId  LineId  LineDesc
    1         10      QQQ
    1         20      WWW
    2         30      ZZZ
    2         10      XXX
    I would appreciate if anyone can provide solution in this scenario.
    Thanks in advance.
    Giri

    Hi Giri,
    Let me try to build an example... If I've misunderstood your requirement, please let me know!
    1) Source table Tab_S:
    create table Tab_S (cs1 number, cs2 varchar2(10));
    2) Parent table (P):
    create table Tab_P (cp1 number, cp2 varchar2(10));
    3) Child table (C):
    create table Tab_C (cc1 number, cc2 varchar2(10), cp1 number);
    4) Function F$_Tab_C (create it in an ODI procedure)
    4.1 - step 1:
    create or replace
    function F$_Tab_C (p_cp2 varchar2, p_cc1 number, p_cc2 varchar2, p_cp1 number) return varchar2 as
    begin
        insert into Tab_C (cc1, cc2, cp1)
        values (p_cc1, p_cc2, p_cp1);
        return p_cp2;
    end;
    Associate this step with a procedure option like "Create_Function".
    4.2 - step 2:
    drop function F$_Tab_C;
    Associate this step with a procedure option like "Drop_Function".
    4.3 - step 3:
    Disable the FK constraint between parent and child.
    Associate this step with a procedure option like "Disable_Constraint".
    4.4 - step 4:
    Enable the FK constraint.
    Associate this step with a procedure option like "Enable_Constraint".
    5) ODI interface:
    Source: Tab_S
    Target: Tab_P
    Mapping:
    cp1 ---> cs1
    cp2 ---> F$_Tab_C(cs2, 123, 'abc', cp1)
    6) ODI Package with all flow:
    6.1 - Drag and drop the procedure and set the options:
    "Create_Function" yes
    "Disable_Constraint" yes
    "Drop_Function" no
    "Enable_Constraint" no
    6.2 - Drag and drop the interface
    6.3 - Drag and drop the procedure (again) and set the options:
    "Create_Function" no
    "Disable_Constraint" no
    "Drop_Function" yes (optional; can be left as "no" if you wish)
    "Enable_Constraint" yes
    These are the necessary steps... There may be some syntax errors because I built it all in Notepad and did not compile it in the DB; it is just to show you the general idea.
    You may be a little wary about disabling the FK, but it is OK because you can guarantee the relationship through the logic (the function).
    The only point is that you must be the only one working on the target tables during the process.
    Does that make sense in your case?

  • Multiple flat file in and multiple target tables

    Hi,
    How can we load multiple flat files into multiple targets?
    I am trying to load data from multiple flat files into their respective tables, but it gives an error like:
    VLD-2411: Cannot handle two file structures
    Make sure that only one file structure is used in a SQL*Loader map
    Can anyone help.
    Regards
    Rakesh Kumar

    I don't think you can use multiple SQL*Loader files in one mapping.
    If you want to load data from multiple files, use an external table.

  • Loading data from Oracle to essbase using ODI errored

    Hi, I am trying to load data from an Oracle table to Essbase, but I am getting the following error.
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 23, in ?
    com.hyperion.odi.essbase.ODIEssbaseException: Missing standard dimension column for data load
         at com.hyperion.odi.essbase.ODIEssbaseDataWriter.loadData(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java)
         at org.python.core.PyMethod.__call__(PyMethod.java)
         at org.python.core.PyObject.__call__(PyObject.java)
         at org.python.core.PyInstance.invoke(PyInstance.java)
         at org.python.pycode._pyx3.f$0(<string>:23)
         at org.python.pycode._pyx3.call_function(<string>)
         at org.python.core.PyTableCode.call(PyTableCode.java)
         at org.python.core.PyCode.call(PyCode.java)
         at org.python.core.Py.runCode(Py.java)
         at org.python.core.Py.exec(Py.java)
         at org.python.util.PythonInterpreter.exec(PythonInterpreter.java)
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:144)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.h.y(h.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    Caused by: com.hyperion.odi.essbase.ODIEssbaseException: Missing standard dimension column for data load
         at com.hyperion.odi.essbase.ODIEssbaseDataWriter.validateColumns(Unknown Source)
         ... 32 more
    com.hyperion.odi.essbase.ODIEssbaseException: com.hyperion.odi.essbase.ODIEssbaseException: Missing standard dimension column for data load
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.h.y(h.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    Also, I have the following doubts:
    1. What should the LKM be set to for the source? Is it LKM SQL to SQL?
    2. I have the standard 6 dimensions in the Oracle table, but Currency and HSP_Rates exist in Essbase. How can I ignore these two dimensions while loading data? Or is there a way to set default values for these two dimensions?
    3. Can I load data from multiple tables to Essbase? If yes, please let me know the procedure.

    I have a similar error to Sravan's, going from MS SQL via Sunopsis staging to Essbase. It is telling me a standard dimension is missing; however, I have all of them accounted for:
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 23, in ?
    com.hyperion.odi.essbase.ODIEssbaseException: Missing standard dimension column for data load
         at com.hyperion.odi.essbase.ODIEssbaseDataWriter.loadData(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
    I'm using multiple time period inputs: BegBalance, Jul, Aug, Sep, Oct, Nov, Dec, Jan, Feb, Mar, Apr, May, Jun (the target has all of those in place of Time Periods).
    I'm using hard-coded input mappings for Metric, Scenario, Version, HSP_Rates and Currency: 'Amount', 'Actual', 'Final', 'HSP_InputValue' and 'Local' respectively.
    The only thing I can think of is that since I'm loading to each of the months in the Time Periods dimension (the reversal was set up to accommodate that), it is somehow still looking for that? Time Periods as a dimension does not show up in the reversal, only the individual months named above.
    Any ideas on this one??
