ODS data load from multiple sources... Help

Hello everyone,
I have a situation:
I have an ODS where, out of 150 fields, 100 come from flat files, 30 from other ODSs, and 20 from master data tables (some are attributes of master data InfoObjects).
I am thinking I can do the following. Please advise if this is fine or if there is another way of doing it, e.g. for better performance:
1. Load the flat file fields via a flat file DataSource/InfoSource/update rules.
2. Load the fields from the other ODSs in the start routine of the above update rules???
3. Load the master data objects through master data InfoSources.
Please confirm whether I can do this. Also, is there any order I have to maintain for these loads? And finally:
When I declare a master data object (say 0MATERIAL) as one of the data objects in my ODS, with some of the fields as attributes of this 0MATERIAL, and someone is already loading data into 0MATERIAL, should I load it again or not? I remember that master data is stored in separate tables... Please remove my confusion :)
All useful answers will be given full points.
Thanks a lot...

Hi:
<i>1. Load the flat file fields via a flat file DataSource/InfoSource/update rules.
2. Load the fields from the other ODSs in the start routine of the above update rules???
3. Load the master data objects through master data InfoSources.</i>
The third one is only possible if those InfoObjects are InfoProviders. This is because if the InfoObjects are set to direct update, you cannot use their InfoSources for any loads other than the InfoObjects themselves.
<i>Please confirm whether I can do this. Also, is there any order I have to maintain for these loads?</i>
An order is needed only if the key figures are set to overwrite and the business requires a specific order.
<i>and finally
When I declare a master data object (say 0MATERIAL) as one of the data objects in my ODS, with some of the fields as attributes of this 0MATERIAL, and someone is already loading data into 0MATERIAL, should I load it again or not? I remember that master data is stored in separate tables... Please remove my confusion :)</i>
I didn't understand this part clearly.
Do you have the attributes of 0MATERIAL in the ODS, and are you populating them in the ODS?
Ram C.
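
For illustration of point 2 in the plan above (enriching records in a start routine): the real routine is written in ABAP and would typically read the master data table of 0MATERIAL (e.g. /BI0/PMATERIAL). The sketch below shows the same lookup pattern in Python purely as an illustration; every table, field, and value in it is invented. It also shows why the attributes do not need to be reloaded: master data lives in its own tables and is only read during the load.

    # Hypothetical stand-in for the 0MATERIAL master data table (/BI0/PMATERIAL).
    material_attrs = {
        "MAT001": {"MATL_GROUP": "01", "DIVISION": "10"},
        "MAT002": {"MATL_GROUP": "02", "DIVISION": "20"},
    }

    # One data package arriving from the flat-file DataSource.
    data_package = [
        {"MATERIAL": "MAT001", "QUANTITY": 5},
        {"MATERIAL": "MAT002", "QUANTITY": 7},
    ]

    # The "start routine": enrich each record with master data attributes
    # instead of loading those attributes into 0MATERIAL again.
    for record in data_package:
        record.update(material_attrs.get(record["MATERIAL"], {}))

    print(data_package)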

Similar Messages

  • How to load from multiple sources to multiple destinations dynamically

    I have more than a hundred tables on a different server (let's say on SQL Server 1). I have to perform a load into a SQL Server 2 destination by copying the change records of those hundred tables, with some condition. I know how to build a data flow task in SSIS to extract data from a source and load it into a destination, but with more than a hundred tables I would need to create more than a hundred data flow tasks, which is very time-consuming. I have heard about going from source to destination dynamically by looping through the tables and creating variables. Now, how do I do this? Remember, those hundred tables have similar column names on both the source and the destination server. How can I perform this initial load faster without using multiple data flow tasks in SSIS?
    Please help! Thank you!

    OK, if the schema of both the source and the destination is the same, you can just run the Import and Export Wizard, and the result can be saved as an SSIS package. If this needs to be developed separately as an SSIS package, the logic can be (see the sketch after this post):
    1 - Get the list of tables using sysobjects or a metadata table where the full table list is maintained.
    2 - Loop through those tables using a Foreach Loop.
    3 - Inside the Foreach Loop, use a script to access the source connection (OLE DB source table name) and the destination connection (OLE DB destination table name).
    4 - Assign the source table connection dynamically from the variable fetched by the Foreach Loop above.
    5 - You can also use expressions for the OLE DB source table name and the OLE DB destination table name inside the data flow task.
    6 - At the end, the data flow task explained above is executed.
    The script task and the data flow task sit inside the Foreach Loop; before the loop, the object list is fetched with an Execute SQL task using a SELECT and assigned to an Object variable.
    The simplest option is Import and Export.
    Happy to help! Thanks. Regards and good Wishes, Deepak. http://deepaksqlmsbusinessintelligence.blogspot.com/
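
    The steps above describe an SSIS package; as a language-neutral sketch of the same loop (an illustration only, not the SSIS implementation), the logic looks like this in Python with pyodbc. The connection strings and driver name are placeholders, and the schemas are assumed identical on both servers:

    import pyodbc  # assumption: plain ODBC access; the answer itself uses SSIS

    # Hypothetical connection strings for the two servers.
    SRC = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=server1;DATABASE=db1;Trusted_Connection=yes"
    DST = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=server2;DATABASE=db2;Trusted_Connection=yes"

    src, dst = pyodbc.connect(SRC), pyodbc.connect(DST)

    # Step 1: get the list of tables (sys.tables plays the role of sysobjects).
    tables = [row.name for row in src.cursor().execute("SELECT name FROM sys.tables")]

    # Steps 2-6: loop through the tables and copy the rows.
    for table in tables:
        rows = src.cursor().execute(f"SELECT * FROM [{table}]")
        columns = [c[0] for c in rows.description]
        insert = (f"INSERT INTO [{table}] ({', '.join(columns)}) "
                  f"VALUES ({', '.join('?' * len(columns))})")
        cur = dst.cursor()
        cur.fast_executemany = True  # speeds up bulk inserts in pyodbc
        cur.executemany(insert, rows.fetchall())
        dst.commit()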

  • Error: Data load from relational source (SQL Server 2005) to Essbase cube

    Hi All,
    I am looking for help from you. I am trying to load data from a SQL Server 2005 table to an Essbase cube using the IKM SQL to Hyperion Essbase (Metadata) knowledge module.
    I am getting the error below. Let me know if I am missing something.
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 61, in ?
    com.hyperion.odi.essbase.ODIEssbaseException: Invalid value specified [RULES_FILE] for Load option [null]
         at com.hyperion.odi.essbase.ODIEssbaseMetaWriter.validateLoadOptions(Unknown Source)
         at com.hyperion.odi.essbase.AbstractEssbaseWriter.beginLoad(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java)
         at org.python.core.PyMethod.__call__(PyMethod.java)
         at org.python.core.PyObject.__call__(PyObject.java)
         at org.python.core.PyInstance.invoke(PyInstance.java)
         at org.python.pycode._pyx3.f$0(<string>:61)
         at org.python.pycode._pyx3.call_function(<string>)
         at org.python.core.PyTableCode.call(PyTableCode.java)
         at org.python.core.PyCode.call(PyCode.java)
         at org.python.core.Py.runCode(Py.java)
         at org.python.core.Py.exec(Py.java)
         at org.python.util.PythonInterpreter.exec(PythonInterpreter.java)
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:144)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    com.hyperion.odi.essbase.ODIEssbaseException: com.hyperion.odi.essbase.ODIEssbaseException: Invalid value specified [RULES_FILE] for Load option [null]
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)

    ODI Step: Prepare for Loading:
    from java.util import HashMap
    from java.lang import Boolean
    from java.lang import Integer
    from com.hyperion.odi.common import ODIConstants
    from com.hyperion.odi.connection import HypAppConnectionFactory
    # Target planning connection properties
    serverName = "HCDCD-HYPDB01"
    userName = "admin"
    password = "<@=snpRef.getInfo("DEST_PASS") @>"
    application = "BUDGET01"
    database = "PLAN1"
    portStr = "1423"
    srvportParts = serverName.split(':',2)
    srvStr = srvportParts[0]
    if(len(srvportParts) > 1):
        portStr = srvportParts[1]
    # Put the connection properties and initialize the essbase loader
    targetProps = HashMap()
    targetProps.put(ODIConstants.SERVER,srvStr)
    targetProps.put(ODIConstants.PORT,portStr)
    targetProps.put(ODIConstants.USER,userName)
    targetProps.put(ODIConstants.PASSWORD,password)
    targetProps.put(ODIConstants.APPLICATION_NAME,application)
    targetProps.put(ODIConstants.DATABASE_NAME,database)
    targetProps.put(ODIConstants.WRITER_TYPE,ODIConstants.DATA_WRITER)
    print "Initalizing the essbase wrapper and connecting"
    pWriter = HypAppConnectionFactory.getAppWriter(HypAppConnectionFactory.APP_ESSBASE, targetProps);
    tableName = "BUDGET01_PLAN1"
    rulesFile = r"ActLd"
    ruleSeparator = "Tab"
    clearDatabase = "None"
    calcScript = r""
    maxErrors = 1
    logErrors = 1
    errFileName = r"E:\OraHome_ODI\Error\Budget01Plan1.err"
    logEnabled = 1
    logFileName = r"E:\OraHome_ODI\Error\Budget01Plan1.log"
    errColDelimiter = r","
    errRowDelimiter = r"\r\n"
    errTextDelimiter = r"'"
    logHeader = 1
    commitInterval = 1000
    calcOnly = 0
    preMaxlScript = r""
    postMaxlScript = r""
    abortOnPreMaxlError = 1
    # set the load options
    loadOptions = HashMap()
    loadOptions.put(ODIConstants.CLEAR_DATABASE, clearDatabase)
    loadOptions.put(ODIConstants.CALCULATION_SCRIPT, calcScript)
    loadOptions.put(ODIConstants.RULES_FILE, rulesFile)
    loadOptions.put(ODIConstants.LOG_ENABLED, Boolean(logEnabled))
    loadOptions.put(ODIConstants.LOG_FILE_NAME, logFileName)
    loadOptions.put(ODIConstants.MAXIMUM_ERRORS_ALLOWED, Integer(maxErrors))
    loadOptions.put(ODIConstants.LOG_ERRORS, Boolean(logErrors))
    loadOptions.put(ODIConstants.ERROR_LOG_FILENAME, errFileName)
    loadOptions.put(ODIConstants.RULE_SEPARATOR, ruleSeparator)
    loadOptions.put(ODIConstants.ERR_COL_DELIMITER, errColDelimiter)
    loadOptions.put(ODIConstants.ERR_ROW_DELIMITER, errRowDelimiter)
    loadOptions.put(ODIConstants.ERR_TEXT_DELIMITER, errTextDelimiter)
    loadOptions.put(ODIConstants.ERR_LOG_HEADER_ROW, Boolean(logHeader))
    loadOptions.put(ODIConstants.COMMIT_INTERVAL, Integer(commitInterval))
    loadOptions.put(ODIConstants.RUN_CALC_SCRIPT_ONLY,Boolean(calcOnly))
    loadOptions.put(ODIConstants.PRE_LOAD_MAXL_SCRIPT,preMaxlScript)
    loadOptions.put(ODIConstants.POST_LOAD_MAXL_SCRIPT,postMaxlScript)
    loadOptions.put(ODIConstants.ABORT_ON_PRE_MAXL_ERROR,Boolean(abortOnPreMaxlError))
    #call begin load
    pWriter.beginLoad(loadOptions)
    Execution step from Operator:
    Read rows: 0
    Insert/Delete/Update rows: 0
    ODI Step: Load Data Into Essbase
    from com.hyperion.odi.common import ODIConstants
    from com.hyperion.odi.connection import HypAppConnectionFactory
    from java.lang import Class
    from java.lang import Boolean
    from java.sql import *
    from java.util import HashMap
    # Get the select statement on the staging area:
    sql= """select C1_ACCOUNT "Account",C3_TIMEPERIOD "TimePeriod",C4_LOBS "LOBs",C5_TREATY "Treaty",C6_SCENARIO "Scenario",C7_VERSION "Version",C8_CURRENCY "Currency",C9_YEAR "Year",C10_DEPARTMENT "Department",C11_ENTITY "Entity",C2_DIVLOC "DivLoc",C12_DATA "Data" from OdiMapping.dbo.C$_0BUDGET01_PLAN1Data where      (1=1) """
    srcCx = odiRef.getJDBCConnection("SRC")
    stmt = srcCx.createStatement()
    srcFetchSize=30
    stmt.setFetchSize(srcFetchSize)
    rs = stmt.executeQuery(sql)
    #load the data
    stats = pWriter.loadData(rs)
    #close the database result set, connection
    rs.close()
    stmt.close()
    ODI Step: Report Statistics
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 2, in ?
    Essbase Writer Load Summary:
         Number of rows successfully processed: 1
         Number of rows rejected: 0
         at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.h.y(h.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    Where am I going wrong with the data load into Essbase? And how can I find the right information for loading data into Essbase?

  • Master data load from 2 source systems

    Hi all,
    I am working on a project which has 2 source systems.
    Now I have to load master data from the 2 source systems without making 0LOGSYS a compounding attribute, because all the objects for which I would want to use a compounding attribute are reference objects for a lot of other InfoObjects. So I don't want to change the structure of the Business Content objects. Please guide me.
    Thanks

    Hi,
    In the cube there is nothing to do with the compounding attribute; that is handled at the master data InfoObject level. If your requirement is to separate the transactions that happen in the different source systems, then add another InfoObject, say ZSOURCE, make it a navigational attribute, and design your report based on that navigational attribute.
    Note that this separation is only possible for material-related transactions, not for others.
    Thanks
    BVR

  • How to Load Master Data Text from Multiple Source Systems

    Situation: Loading vendor master (text) from two systems. Some of the vendor keys/numbers are the same, and so are the descriptions. I understand that if I were loading attributes, I could create another attribute, such as a source system ID, and compound it to 0VENDOR. However, I don't believe you can do the same when loading text.
    Question: So, how can I load duplicate vendor keys and descriptions into BW using the *_TEXT DataSource so that they do not overwrite each other?
    Thanks!

    Hi,
    Don't doubt it, it'll work!
    Compound the source system ID to 0VENDOR (see the sketch after this post for why the compound key prevents the overwrite).
    If you have doubts about the predefined structure of the *_TEXT DataSource and the direct InfoSource, then use a flexible InfoSource. You may have to derive the source system ID in a routine or a formula.
    Best regards,
    Eugene
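
    To see why the compounding prevents the overwrite, here is a minimal sketch (plain Python dictionaries standing in for the text table; the vendor number and texts are invented):

    # Without the source system ID in the key, the second load overwrites the first.
    texts = {}
    texts["0000100001"] = "ACME Corp. (from system A)"
    texts["0000100001"] = "ACME Corp. (from system B)"  # overwrites!
    print(len(texts))  # -> 1

    # With the source system ID compounded into the key, both records survive.
    texts = {}
    texts[("SYSA", "0000100001")] = "ACME Corp. (from system A)"
    texts[("SYSB", "0000100001")] = "ACME Corp. (from system B)"
    print(len(texts))  # -> 2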

  • Can we merge data from multiple sources in Hyperion Interactive Reporting?

    Hi Experts,
    Can we merge data from multiple sources in Hyperion Interactive Reporting? For example, can we have a report based on relational databases such as DB2, Oracle, and Informix, and multidimensional databases such as DB2, MSOLAP, and Essbase?
    Thanks,
    V

    Yes. Each source has its own query, and the queries share some common dimension so that the Results sections can be joined together in a final query.
    Look in the help under "Creating Local Joins".

  • Open Hub - Data from multiple sources to single target

    Hello,
    I am using Open Hub to send data from SAP BI to flat files. I have a requirement to create a single destination from multiple sources. In other words, in BI we have different tables for attributes and texts; I would like to combine the attribute data and the text data into a single file. For example, I want to have the material attributes and texts together in one output file.
    Is this possible with Open Hub? If yes, could you please help me understand the process?
    Thanks,
    KK

    Hi,
    1. Create the InfoSpoke and activate it.
    2. Change it and go to the transformation.
    3. Check the box "InfoSpoke with Transf. Using BAdI".
    4. You are asked whether you want to generate the spoke. Say yes, simply set some texts and activate, then return.
    5. You can now change the target structure. Simply create a Z structure in SE11 with all the attribute and text fields in it, and enter it here.
    6. Double-click on the BAdI implementation, then double-click on the "TRANSFORM" method of the implementation. It will take you to the method "IF_EX_OPENHUB_TRANSFORM~TRANSFORM".
    7. Write code to select and fill the text field and to map the other fields to the attribute fields (see the sketch after this post).
    Example:
    ZEMPLOYEE_ATTR_STRU - target structure for InfoSpoke EMPLOYEE_ATTR
    EMPLOYEE    /BI0/OIEMPLOYEE    NUMC  8   0  Employee
    DATETO      /BI0/OIDATETO      DATS  8   0  Valid to
    DATEFROM    /BI0/OIDATEFROM    DATS  8   0  Valid from
    COMP_CODE   /BI0/OICOMP_CODE   CHAR  4   0  Company code
    CO_MST_AR   /BI0/OICO_MST_AR   CHAR  4   0  Controlling Area of Master Cost Center
    HRPOSITION  /BI0/OIHRPOSITION  NUMC  8   0  Position
    MAST_CCTR   /BI0/OIMAST_CCTR   CHAR  10  0  Master Cost Center
    TXTMD       RSTXTMD            CHAR  40  0  Medium description
    Note: the text field and the attribute fields are in the same structure.
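
    The TRANSFORM method itself is written in ABAP; purely as an illustration of the join it performs (all names and values below are invented), the logic amounts to filling TXTMD from the text table while keeping the attribute fields:

    # Attribute records as they come from the InfoSpoke source.
    attributes = [
        {"EMPLOYEE": "00000001", "COMP_CODE": "1000", "MAST_CCTR": "CC01"},
        {"EMPLOYEE": "00000002", "COMP_CODE": "2000", "MAST_CCTR": "CC02"},
    ]

    # Text table keyed by the same characteristic.
    texts = {"00000001": "Jane Doe", "00000002": "John Smith"}

    # One combined record per employee for the single output file.
    output = [dict(rec, TXTMD=texts.get(rec["EMPLOYEE"], "")) for rec in attributes]
    print(output)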

  • Data deleted in ODS, data coming from the DataSource 2LIS_11_VASCL

    Friends,
    I have a situation:
    Data was deleted in an ODS whose data comes from the DataSource 2LIS_11_VASCL.
    For the above ODS and DataSource, when trying to delete one request, the whole data got deleted.
    All the data was deleted in the foreground; no background job was generated for this.
    I am really worried about this issue. Can you please tell me the possible causes?
    Many thanks
    VSM

    Hi,
    I suppose you want to know the possibility of getting the data back.
    If the entire data has been deleted, you can reload the data from the source system.
    Fill the setup table for your application, then carry out an init request.
    Please note that you would have to use a transaction-free period for this activity so that no data is missed.
    Once this is done, the delta queues will start filling up again.

  • Summing up key figure in Cube - data load from ODS

    Hello Gurus,
    I am doing a data load from an ODS to a cube.
    The records in the ODS are at line-item level, and they all need to be summed up at header level. There is only one key figure. The ODS has header and line-item fields.
    I am loading only the header field data (and not the item-field data) from the ODS to the cube.
    I am expecting only one record in the cube (for all the item-level records) with the key figure summed up, but I don't see that.
    Can anyone please explain how to achieve it?
    I promise to reward points.
    =====
    Example to elaborate my point.
    In the ODS:
    Header field   Item field   Quantity
    123            301          10
    123            302          20
    123            303          30
    Expected record in the cube:
    Header field   Quantity
    123            60
    ====================
    Regards,
    Pramod.

    Hello Oscar and Paolo,
    Thanks for the replies. I am using BW 7.0.
    Paolo suggested:
    >>If you don't add the item number to the cube and set the quantity to addition in the update rules from the ODS to the cube, it works.
    I did that, and I still get 3 records in the cube.
    Oscar suggested:
    >>What kind of aggregation do you have for your key figure in the update rules (update or no change)?
    It is "summation", and it cannot be changed (or at least I do not know how to change it).
    There are other dimensions in the cube which correspond to fields in the ODS, but I just mentioned these two (header and item level) for simplicity.
    Can you please help?
    Thank you.
    Pramod.
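
    A cube only collapses records whose characteristics are all identical; with summation as the update type, the key figures then add up. So if three records survive, some characteristic that still reaches the cube must differ per item (one of the "other dimensions" mentioned above). As a minimal illustration of the intended aggregation, using the values from the example:

    from collections import defaultdict

    # ODS records at line-item level, as in the example above.
    ods = [
        {"header": "123", "item": "301", "quantity": 10},
        {"header": "123", "item": "302", "quantity": 20},
        {"header": "123", "item": "303", "quantity": 30},
    ]

    # Only the header reaches the "cube", so the three records collapse into one.
    cube = defaultdict(int)
    for rec in ods:
        cube[rec["header"]] += rec["quantity"]
    print(dict(cube))  # -> {'123': 60}

    # If any item-dependent field were part of the cube key, the three
    # records would stay distinct -- which is exactly what is observed.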

  • Error in data loading from 3rd-party source system with DBConnect

    Hi,
    We have just finished an upgrade from SAP BW 3.10 to SAP NW 7.0 EHP1.
    After the upgrade, we are facing a problem with data loads from a third-party Oracle source system using DBConnect.
    The connection is working OK and we can see the tables in the source system, but we cannot load the data.
    The error in the monitor is as follows:
    'Error message from the source system
    Diagnosis
    An error occurred in the source system.
    System Response
    Caller 09 contains an error message.
    Further analysis:
    The error occurred in Extractor.
    Refer to the error message.'
    But, unfortunately, the error message has no further information.
    If we look at the job log in SM37, the job finished with the following log:
    27.10.2009 12:14:19 Job started                                                                                00           516          S 
    27.10.2009 12:14:19 Step 001 started (program RSBATCH1, variant &0000000000119, user ID RXSAHA)                    00           550          S 
    27.10.2009 12:14:23 Start InfoPackage ZPAK_4FMNJ2ZHNNXC6HT3A2TYAAFXG                                              RSM1          797          S 
    27.10.2009 12:14:24 Element NOAUTHORITYCHECK is not available in the container                                     OL           356          S 
    27.10.2009 12:14:24 InfoPackage ZPAK_4FMNJ2ZHNNXC6HT3A2TYAAFXG created request REQU_4FMXSQ6TLSK5CYLXPBOGKF31G     RSM1          796          S 
    27.10.2009 12:14:24 Job finished                                                                                00           517          S 
    In a BW 3.10 system there is no message related to the element NOAUTHORITYCHECK, so I am wondering whether this is something new in NW 7.0.
    Thanks in advance,
    Rajib

    There are a few things to check when you get errors like this:
    1. The RFC connection failed - check the RFC connection in SM59.
    2. Check the source system.
    3. Check with the Oracle consultants whether they are filling up the loads; if so, tell them to stop.
    4. Check the IDoc processing.
    5. Check for memory issues.
    6. Finally, change the DataSource, then activate it again and run the load.
    If the RFC connection is OK, also check SAP Note 692195 for the authorization issue.
    Santosh

  • How can I import data from multiple sources into a single RPD in OBIEE 11g

    How can I import data from multiple sources into a single RPD in OBIEE 11g?

    Hi,
    To import from multiple data sources, first configure ODBC connections for the respective data sources; then you can import from each of them. When you import the data, a connection pool is created automatically.
    Thanks

  • Direct vs. indirect data load from ODS to InfoCube

    We know that there are two options for loading data from an ODS to an InfoCube:
    1. Direct load:
    If the ODS setting "Update data targets from ODS object automatically" is checked, then when data is loaded into the ODS it is also loaded into the InfoCube. This way, only one InfoPackage is needed: it loads the data into the ODS, and the InfoCube is then filled automatically.
    2. Indirect load:
    If that setting is NOT checked, then besides the InfoPackage that loads the data into the ODS, a second InfoPackage has to be created to load the data from the export DataSource of the ODS into the InfoCube.
    I wonder what the pros and cons of these two loading methods are.
    Input from every BW expert is welcome!

    Hi Kevin,
    Direct loads, or rather automated loads, are usually used where you need to load data into the final data target automatically and there are no dependencies. This approach involves less maintenance, and you need to execute only a single InfoPackage to send the data to the final data target.
    Indirect loads are usually used when you have a number of dependencies and the ODS object is one of the objects in a process chain. So if you need the ODS data load to wait until some other event has been executed, indirect loads are used through process chains.
    Regards,
    Jasprit

  • Any tutorial/sample to create a single PDF from multiple source files using the PDF Assembler in a watched folder process

    Is there any tutorial or sample showing how to create a single PDF from multiple source files using the PDF Assembler in a watched folder process? I have a client application which prepares a number of source files and some metadata (in .XML) that will be used in the header/footer. Is it possible to put a DDX file generated at runtime into the watched folder and use it in the process? If so, how can I pass the file names in the DDX? Any sample process would be very helpful.

    If possible, use the Assembler API in your client application instead of doing this with a watched folder. Here are the Assembler samples: LiveCycle ES2.5 * Programming with LiveCycle ES2.5
    A watched folder can accept zip files (sample: Configuring a watched folder to handle multiple input files and write results to a single folder | Adobe LiveCycle Blog). You can also use an execute script to create the DDX at runtime: LiveCycle ES2 * Application Development Using LiveCycle Workbench ES2 (see the sketch after this post).
    Thanks
    Wasil
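
    As an illustration of generating a DDX at runtime (a sketch only: it follows the basic Assembler DDX pattern of one result PDF assembled from source PDFs, but the namespace and the file names are assumptions to verify against your LiveCycle documentation):

    import xml.etree.ElementTree as ET

    DDX_NS = "http://ns.adobe.com/DDX/1.0/"  # assumed DDX namespace; please verify

    def build_ddx(result_name, source_files):
        """Build a minimal DDX that assembles source_files into result_name."""
        ET.register_namespace("", DDX_NS)
        ddx = ET.Element(f"{{{DDX_NS}}}DDX")
        result = ET.SubElement(ddx, f"{{{DDX_NS}}}PDF", {"result": result_name})
        for src in source_files:
            ET.SubElement(result, f"{{{DDX_NS}}}PDF", {"source": src})
        return ET.tostring(ddx, encoding="unicode")

    # Hypothetical file names coming from the client application.
    print(build_ddx("merged.pdf", ["cover.pdf", "body.pdf", "appendix.pdf"]))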

  • Number of parallel processes during data load from R/3 to BI

    Dear Friends,
    We are using BI 7.00. We have a requirement to increase the number of parallel processes during data loads from R/3 to BI. I want to modify this for a particular DataSource and check the effect. Can the experts provide helpful answers to the following questions?
    1) When a load is taking place, or has taken place, where can we see how many parallel processes that particular load used?
    2) Where should I change the setting for the number of parallel processes for the data load (on the R/3 side, not within BI)?
    3) How does the system work, and what will be the net result of increasing or decreasing the number of parallel processes?
    Expecting the experts' help.
    Regards,
    M.M

    Dear Des Gallagher,
    Thank you very much for the useful information. The following was my observation:
    From the posts in this forum I understood that the setting for a specific DataSource can be made at the InfoPackage and DTP level. I did so and found that there was no change in the load, i.e. the system by default takes only one parallel process even though I maintained 6.
    Can you kindly explain the point above, i.e.:
    1) Even though the value is maintained at the InfoPackage level, will the system consider it or not? If not, from which transaction does the system derive the single parallel process?
    Also, we wanted to increase the package size, but we failed because I could not understand which values have to be maintained. Can you explain that in detail?
    Can you clarify my doubts and provide a solution?
    Regards,
    M.M

  • Modeling the data model with multiple source systems

    Hi gurus,
    I have a scenario where I am getting data from multiple source systems, all R/3 but from different locations.
    My doubt is about how to do the reporting on the available data:
    1. Do I have to build separate InfoCubes for each source system, or separate data designs for each source system?
    2. How can I consolidate the data from the different source systems into one and report on it sensibly, without losing the source system identification?
    Thanks and regards
    Neel

    Hi all,
    Thanks for your focus. Yes, I am also thinking of a flexible solution that I can build easily and that doesn't have much complexity, but I wanted to try different options as well so that we can go for the best one.
    I thought of a MultiProvider at first when RafB suggested it, and I agree with you as well, Lilly, that the data volume will be a problem. So, keeping all this in view, I want to build a solution that is sensible and not confusing in the reports (I mean clear and readable reports).
    [email protected]
    Please kindly forward any documents which might be helpful for me in this scenario.
    Thanks and regards
    Neel
    Message was edited by: Neel Kamal
