Load Data to Hierarchy

Hi,
I have maintained the Cost Center Hierarchy in SAP, and I can see that the hierarchies and master data were transferred from SAP to BW.
I can see that the hierarchy is defined correctly.
My problem is that some of the values transferred to BW fall under 'unassigned'.
What else should I check in the master data load and the hierarchy load?
Thx

aa

Similar Messages

  • Error while loading data into hierarchy

    Hi,
I tried loading the master data from a flat file into a hierarchy, but it gives me an error message saying the entry is invalid and the hierarchy doesn't exist. Could anyone please guide me on how to proceed?
    Thanks,
    Kuldeep.

    Hi,
1. Create a hierarchy: go to Transfer structure -> Hierarchy structure -> Create hierarchy, and check whether it should be sorted, time dependent, etc.
2. Assuming it is a flat file load in CSV format, select IDOC as the upload method. Then, while loading data through the InfoPackage, just check that you have selected the appropriate hierarchy name on the hierarchy selection tab.
This will hopefully solve your problem.
    Kind Regards,
    Ray

  • How to load data into Hierarchy

    Hi Experts,
I have created a hierarchy but am unable to activate it. Whenever I click on activate hierarchy, it gives me an error saying there is no data to load, but when I schedule the data in the InfoPackage I am able to request the data. I can even see the data in the PSA.
Please guide me on how to load the data into the hierarchy.
    Thanks & Regards
    Sameer Khan

    Hi Sameer,
BI 7.0 doesn't support hierarchy datasources. For hierarchies you have to use a BW 3.x datasource only.
The following links should help you out:
http://help.sap.com/saphelp_nw70/helpdata/EN/0e/fd4e3c97f6bb3ee10000000a114084/frameset.htm
http://help.sap.com/saphelp_nw70/helpdata/EN/80/1a6729e07211d2acb80000e829fbfe/frameset.htm
Hope it helps.
    Regards
    chandra sekhar

  • Data Load error in Hierarchy

    Hi ,
While loading data to the hierarchy I am getting the error below:
Parent ID of node ID 00000828 is not the same as the ID of the higher level node
I have tried to analyze this issue but am not able to find the cause and fix it. Please help me sort it out.
    Thanks
    Pradeep

    Hi pradeep,
To load a hierarchy you load a flat file (e.g. an Excel/CSV sheet) containing the details of the parent node ID, node ID, etc.
In that sheet, the parent node ID of each node must be the same as the node ID of its higher-level node.
Please correct it manually in the sheet and reload it, as in the example below.
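For illustration, a minimal hierarchy sheet might look like this (the columns follow the usual BW hierarchy flat-file structure; the names and IDs here are made up). Every PARENTID must repeat the NODEID of the node one level up:
NODEID    INFOOBJECT    NODENAME    PARENTID
00000001  0HIER_NODE    ROOT
00000002  0COSTCENTER   CC_1000     00000001
00000828  0COSTCENTER   CC_2000     00000001
The error above means the PARENTID stored in the row for node 00000828 did not match the NODEID of the node it sits under in the structure.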
    Thanks and Regards
    Asim

  • Error while loading the master data into hierarchy

    Hi,
When I try to load the master data into a hierarchy, it shows me the following error:
"Invalid Entry, hierarchy xxxxxx does not exist".
Can you please tell me what the possible cause of this error might be?
    Thanks,
    Prateek

    Hello
Have you created the hierarchy? I mean, go to Transfer structure -> Hierarchy structure -> Create hierarchy, and check whether it should be sorted, time dependent, etc. After that, while loading data through the InfoPackage, just check that you have selected the appropriate hierarchy name on the hierarchy selection tab.
Hope it will help!
    Regards,
    Sandeep

  • Aggregating data loaded into different hierarchy levels

I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE dimensioned by a time dimension (parent-child type).
I read the help in the DML Reference of the OLAP Worksheet, and it says the following:
When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March.
To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data:
DEFINE all_but_q4 VALUESET time
LIMIT all_but_q4 TO ALL
LIMIT all_but_q4 REMOVE 'Q4'
Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement:
RELATION time.r PRECOMPUTE (all_but_q4)
How do I do this for more than one dimension?
Below is my case study:
    DEFINE T_TIME DIMENSION TEXT
    T_TIME
    200401
    200402
    200403
    200404
    200405
    200406
    200407
    200408
    200409
    200410
    200411
    2004
    200412
    200501
    200502
    200503
    200504
    200505
    200506
    200507
    200508
    200509
    200510
    200511
    2005
    200512
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    2004 NA
    200412 2004
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    2005     NA
    200512 2005
    DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
    EQ -
    aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
    COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
2004 4,00 ---> here it's right, but...
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
2005 10,00 ---> here it must be 30,00, not 10,00
    200512 NA
    DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
    T_TIME PRUEBA2_IMPORTE_STORED
    200401 NA
    200402 NA
    200403 NA
    200404 NA
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 NA
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00
    200512 NA
    DEFINE OBJ262568349 AGGMAP
    AGGMAP
    RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
    args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
    AGGINDEX NO
    CACHE NONE
    END
    DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
    T_TIME_AGGRHIER_VSET1 = (H_TIME)
    DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
    T_TIME_AGGRDIM_VSET1 = (2005)
    Regards,
    Mel.

    Mel,
There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue differs depending on the needs of the application.
1. Data is loaded symmetrically at uniform mixed levels. Examples include loading data at "quarter" in historical years but at "month" in the current year; this does /not/ include data loaded at both quarter and month within the same calendar period.
= solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
2. Data is loaded at both a detail level and its ancestor, as in your example case.
= the aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, each of which is then added as a child of the corresponding aggregate node. This enables repeatable calculation as well as auditability of the resulting value.
Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value and returns it, for a result of 10; the aggregate command would recalculate based on January and February, for a result of 20.
    To solve your usage case I would suggest a hierarchy that looks more like this:
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    200412 2004
    2004_SELF 2004
    2004 NA
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    200512 2005
    2005_SELF 2005
    2005 NA
    Resulting in the following cube:
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    200412 NA
    2004_SELF NA
    2004 4,00
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    200512 NA
    2005_SELF 10,00
    2005 30,00
    3. Data is loaded at a level based upon another dimension; for example product being loaded at 'UPC' in EMEA, but at 'BRAND' in APAC.
= this can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
    4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
    = often requires the use of ALLOCATE in order to push the data to the leaves in order to correctly calculate the aggregate values during aggregation.

  • Can not load data into the ods.

When I was trying to load data into BW 3.5 from an R3 system, I got this message:
"For InfoSource ZSD_A507_011, there is no Hierarchies for master data in source system R3PCLNT300"
Any idea? There are no hierarchies for this InfoSource.

    Hello Mat,
What are you trying to load? Is the datasource a hierarchy datasource?
Just check this thread:
    There are no hierarchies for this InfoSource in source system
    Thanks
    Ajeet

  • How to load a partial hierarchy

    EPM 10 NW
While loading 0GL_ACCOUNT, not all master data in BW is relevant for my planning in BPC, hence I need to load only one branch of the hierarchy.
As far as I can tell, we can only filter by an attribute when loading from a BW InfoObject using a Data Manager package. But in our case there is no attribute that can identify all the members I need to load.
Is there a way to load master data from only a specific branch rather than the full hierarchy?
    NODE
    NODE1 -> NODE11, NODE12
    NODE2 -> NODE21, NODE22
In the above tree I only want to extract NODE2 -> NODE21, NODE22 into the hierarchy, ignoring NODE1 in BW.
    ~Dilkins

    Hi Andrew!
I'm not sure if you can explicitly load just one branch, but you have some options in both the transformation file and the conversion file.
If that doesn't work, then I guess one option is to load the whole hierarchy, then build your own customized hierarchy and use that in the planning sheets.
For example, if you load the whole hierarchy, you can then define a Parenth2 hierarchy in BPC and use only the NODE and NODE2 members in that hierarchy.
    Br
    Patrick

  • Error in loading data into essbase while using Rule file through ODI

    Hi Experts,
Referring to my previous post, Error while using Rule file in loading data into Essbase through ODI:
I am facing a problem while loading data into Essbase. I am able to load data into Essbase successfully, but when I use a rule file to add values to existing values, I get an error.
test is my rule file.
    com.hyperion.odi.essbase.ODIEssbaseException: com.hyperion.odi.essbase.ODIEssbaseException: Cannot put olap file object. Essbase Error(1053025): Object [test] already exists and is not locked by user [admin@Native Directory]
    at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
    at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.execInBSFEngine(SnpScriptingInterpretor.java:346)
    at com.sunopsis.dwg.codeinterpretor.SnpScriptingInterpretor.exec(SnpScriptingInterpretor.java:170)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java:2458)
    at oracle.odi.runtime.agent.execution.cmd.ScriptingExecutor.execute(ScriptingExecutor.java:48)
    at oracle.odi.runtime.agent.execution.cmd.ScriptingExecutor.execute(ScriptingExecutor.java:1)
    at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2906)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2609)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:540)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:453)
    at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:1740)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:338)
    at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:214)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:272)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:263)
    at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:822)
    at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:123)
    at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:83)
    at java.lang.Thread.run(Thread.java:662)
    from com.hyperion.odi.common import ODIConstants
    from com.hyperion.odi.connection import HypAppConnectionFactory
    from java.lang import Class
    from java.lang import Boolean
    from java.sql import *
    from java.util import HashMap
    # Get the select statement on the staging area:
    sql= """select C1_HSP_RATES "HSP_Rates",C2_ACCOUNT "Account",C3_PERIOD "Period",C4_YEAR "Year",C5_SCENARIO "Scenario",C6_VERSION "Version",C7_CURRENCY "Currency",C8_ENTITY "Entity",C9_VERTICAL "Vertical",C10_HORIZONTAL "Horizontal",C11_SALES_HIERARICHY "Sales Hierarchy",C12_DATA "Data" from PLANAPP."C$_0HexaApp_PLData" where (1=1) """
    srcCx = odiRef.getJDBCConnection("SRC")
    stmt = srcCx.createStatement()
    srcFetchSize=30
    #stmt.setFetchSize(srcFetchSize)
    stmt.setFetchSize(1)
    print "executing query"
    rs = stmt.executeQuery(sql)
    print "done executing query"
    #load the data
    print "loading data"
    stats = pWriter.loadData(rs)
    print "done loading data"
    #close the database result set, connection
    rs.close()
    stmt.close()
    Please help me on this...
    Thanks & Regards,
    Chinnu

    Hi Priya,
Thanks for the reply. I already checked that there are no locks on the rule file. I don't know what the problem is. It works fine without the rule file and throws the error only when the rule file is used.
    Please help on this.
    Thanks,
    Chinnu

  • How to see data in hierarchy

    Experts,
I was searching the forums but could not find an appropriate thread on this. I want to see data:
1. In the PSA tables (or equivalent) for the hierarchy data after a successful InfoPackage load. I can see the number of records in the monitor for the InfoPackage, but how do I see the actual data for that run, somewhere like the 'ALE Inbox'?
2. When I click on 'Display' for the hierarchy, I can see the InfoObjects under the hierarchy, but how do I see the actual data inside those InfoObjects?
Please list the tcodes if possible.

    Hi Latha,
Whenever you create a hierarchy and activate it, an internal ID called the hierarchy ID is generated, and this information is stored in the table RSHIEDIR. In SE12, go to this table and enter the name of your hierarchy in HIENM and the name of the InfoObject for which you created the hierarchy (0COSTELMNT in this case) in IOBJNM. With this you will get the HIEID.
Pass this HIEID to the hierarchy (H) table of the cost element InfoObject (/BI0/HCOSTELMNT) and there you will find all 9 records that you loaded for your hierarchy. Hope this helps solve your issue.
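The same lookup can also be done in a small ABAP sketch (the hierarchy name ZCOSTELMNT_HIER is a placeholder; the tables and fields are the ones described above):
DATA: LV_HIEID TYPE RSHIEDIR-HIEID,
      LT_NODES TYPE STANDARD TABLE OF /BI0/HCOSTELMNT.
* look up the internal hierarchy ID in the hierarchy directory
SELECT SINGLE HIEID FROM RSHIEDIR INTO LV_HIEID
  WHERE HIENM  = 'ZCOSTELMNT_HIER'   " placeholder hierarchy name
    AND IOBJNM = '0COSTELMNT'.
* read all nodes of that hierarchy from the H table
IF SY-SUBRC = 0.
  SELECT * FROM /BI0/HCOSTELMNT INTO TABLE LT_NODES
    WHERE HIEID = LV_HIEID.
ENDIF.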

  • How to create  a datasource for 0COSTCENTER to load data in csv file  in BI

How do I create a datasource for 0COSTCENTER to load data from a CSV file in a BI 7.0 system?
Can you email me screenshots of the steps for loading individual values of the hierarchy using a CSV file?
Thank you very much.
My email is <Removed>
    allen

Step 1: Load the required master data for 0COSTCENTER into the BI system.
Step 2: Enable the characteristic to support hierarchies for 0COSTCENTER, and specify the external characteristic (in the lowest or last node) when creating the characteristic InfoObject.
Step 3: On the last node of the hierarchy structure in the InfoObject, right-click and create the hierarchy manually by inserting the master data values, since BI doesn't support this hierarchy load directly; you need to do it manually.
Step 4: Mapping:
Create a text node as the first node (the root node).
Insert the characteristic nodes.
Insert the last node of the hierarchy.
Then you need to create an Open Hub Destination for extracting data into the .csv file:
Step 1: Create the Open Hub Destination, give the master data table name, and enter all the required fields. Create the transformations for this Open Hub, connecting to the external file or Excel file source. Then give the location on your local disk, or the path on the server, in the first tab and request the data. It should work; let me know if you need anything else.
    Thanks,
    Sandhya

  • How to load data from info object to ODS

    Hello BW Gurus,
Is there any way to load data from an InfoObject to an ODS? I am unable to find the InfoSource for that particular InfoObject's master data.
For example: we have the InfoObject 0PROFITCENTER, which is loaded every day. We want the same data in an ODS too, but I can't find any InfoSource related to this InfoObject against which to create update rules.
    Please advise me how to proceed with this,
    Thanks,
    Swathi.

    hi Swathi,
As mentioned, if you just need the master data text or attribute update, then it's sufficient to load 0PROFITCENTER master data (don't forget to 'apply hierarchy/attribute change' - RSA1 -> Tools).
If you are going to update an attribute in the ODS with an attribute from 0PROFITCENTER, you can choose the 'look up master data attribute' method in the update rules.
If the requirement really does need 0PROFITCENTER assigned to update rules, then you first have to generate an export datasource for the InfoObject 0PROFITCENTER: right-click the InfoObject and choose 'generate export datasource', then run 'replicate datasources' from the BW myself source system; after that it will be available. To display it, go to RSA1 -> InfoSources and choose Settings -> Display generated objects from the menu.
Hope this helps.

  • How to load data thru flat file in demantra?

    Hi,
I am using the seeded data model for loading data for the DM module in Demantra.
As per the data model, there are 3 staging tables for items, locations and sales data: t_src_item_tmpl, t_src_loc_tmpl and t_src_sales_tmpl respectively.
I have looked at the final table for sales data, i.e. the sales_data table. There is only one item_id and one location_id column, against which sales_date and quantity are loaded.
I have the same sales data and quantity to be loaded for different location and item hierarchies.
How is the data file created? If you have any sample data files, can you please share them with me? Do any changes need to be made in the data model for loading data in custom hierarchies?
    Regards

Welcome user10979220 to the wonderful forum.
"How many rows will be there in the item flat file? How many rows will be there in the location flat file? How many rows will be there in the sales data flat file?"
The answers to these questions depend on the number of sales that have occurred as well as on the item hierarchy and the location hierarchy.
From the example:
Item_name | Organization | Ship to site | Sales date | Quantity
item1 | org1 | site1 | 19-Jan-2008 | 80
item1 | org1 | site2 | 20-Jan-2008 | 100
I can say that the number of rows in the sales data file is 2, while the item file has 1 row and the location file has 2.
The flat files would then look as follows.
The item file can have:
Item_Id | Item_Name
01 | Item1
The location file can have:
Org_Id | Org_Name | Site_Id | Site_name
O_01 | Organization 1 | S_01 | Site 1
O_01 | Organization 1 | S_02 | Site 2
The sales file can have:
Item_Id | Item_Name | Org_Id | Org_Name | Site_Id | Site_name | sales_date | Quantity
01 | Item1 | O_01 | Organization 1 | S_01 | Site 1 | 19-Jan-2008 | 80
01 | Item1 | O_01 | Organization 1 | S_02 | Site 2 | 20-Jan-2008 | 100
Once you define the data model and load the data, it is advisable not to change the data model, since the next time you load incremental data it will error out due to mismatched columns. So it is better to define the data model first and then create the files accordingly.
Hence, if after loading data you want to view sales data at the product_category1 level, you should already have the product_category1 level defined in the data model. Otherwise you have to create the product_category1 level and associate it with the item hierarchy, and the item file should have columns for product_category1.
The item file will then look like:
Item_Id | Item_Name | Product_category1_Id | Product_category1_desc
01 | Item1 | PC_01 | Product_category 1
and the sales file will look like:
Item_Id | Item_Name | Org_Id | Org_Name | Site_Id | Site_name | sales_date | Quantity
01 | Item1 | O_01 | Organization 1 | S_01 | Site 1 | 19-Jan-2008 | 80
01 | Item1 | O_01 | Organization 1 | S_02 | Site 2 | 20-Jan-2008 | 100
This is just a basic overview of loading data into the staging tables. In short, the item file should contain the item hierarchy, the location file should contain the location hierarchy, and the sales file must contain the details of the sales at the lowest item-location level.
    Hope I answered your query.
    Thanks and Regards,
    Shekhar

  • Loading data from infopackage via application server

    Hi Gurus,
I have a requirement where I need to load data from an internal table to a CSV file on the application server (AL11) via OPEN DATASET, and then read the file from the application server via an InfoPackage routine and load it to the PSA.
I have created a custom program to load the data to the AL11 application server, using the code below.
DATA : BEGIN OF XX,
         NODE_ID    TYPE N LENGTH 8,
         INFOOBJECT TYPE C LENGTH 30,
         NODENAME   TYPE C LENGTH 60,
         PARENT_ID  TYPE N LENGTH 8,
       END OF XX.
DATA : I_TAB LIKE STANDARD TABLE OF XX.
DATA : FILE_NAME TYPE RLGRAP-FILENAME.
FILE_NAME = './SIMMA2.CSV'.
* open the file on the application server for writing
OPEN DATASET FILE_NAME FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
* build two sample hierarchy records
XX-NODE_ID    = '5'.
XX-INFOOBJECT = 'ZEMP_H'.
XX-NODENAME   = '5'.
XX-PARENT_ID  = '1'.
APPEND XX TO I_TAB.
XX-NODE_ID    = '6'.
XX-INFOOBJECT = 'ZEMP_H'.
XX-NODENAME   = '6'.
XX-PARENT_ID  = '1'.
APPEND XX TO I_TAB.
* write each record as one line of the file
LOOP AT I_TAB INTO XX.
  TRANSFER XX TO FILE_NAME.
ENDLOOP.
* always close the dataset so the buffer is flushed and the file is released
CLOSE DATASET FILE_NAME.
Now I can see the data on the application server in AL11.
Then in my InfoPackage I have the following routine:
form compute_flat_file_filename
     using p_infopackage type rslogdpid
  changing p_filename    like rsldpsel-filename
           p_subrc       like sy-subrc.
* Insert source code to current selection field
* $$ begin of routine - insert your code only below this line -
* return the name of the file on the application server
  P_FILENAME = './SIMMA2.CSV'.
  DATA : BEGIN OF XX,
           NODE_ID    TYPE N LENGTH 8,
           INFOOBJECT TYPE C LENGTH 30,
           NODENAME   TYPE C LENGTH 60,
           PARENT_ID  TYPE N LENGTH 8,
         END OF XX.
  DATA : I_TAB LIKE STANDARD TABLE OF XX.
* optional: read the file back to verify that it is accessible;
* the load itself only needs P_FILENAME and P_SUBRC to be set
  OPEN DATASET P_FILENAME FOR INPUT IN TEXT MODE ENCODING DEFAULT.
  IF SY-SUBRC = 0.
    DO.
      READ DATASET P_FILENAME INTO XX.
      IF SY-SUBRC <> 0.
        EXIT.
      ELSE.
        APPEND XX TO I_TAB.
      ENDIF.
    ENDDO.
  ENDIF.
  CLOSE DATASET P_FILENAME.
* a return code of 0 tells the InfoPackage that the file name is valid
  P_SUBRC = 0.
endform.
I have the following doubts:
While loading the data from the internal table to the application server, do I need to add any 'data separator' or 'escape sign' characters?
Also, at the InfoPackage level I will select the 'file type' as 'CSV file'; what characters do I need to enter in the 'data separator' and 'escape sign' boxes? Please point me to a clear tutorial for this if one exists. Also, can we use a process chain to load data for an InfoPackage using a file from the application server? This is a 3.x datasource, and we are loading a hierarchy via a flat file on the application server.
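For the separator question, one common pattern is to build each line explicitly before TRANSFER instead of writing the raw structure. A minimal sketch (the ';' separator is an assumption and must match whatever is entered as 'data separator' in the InfoPackage):
DATA LV_LINE TYPE STRING.
LOOP AT I_TAB INTO XX.
* build one CSV line with an explicit separator between the fields
  CONCATENATE XX-NODE_ID XX-INFOOBJECT XX-NODENAME XX-PARENT_ID
    INTO LV_LINE SEPARATED BY ';'.
  TRANSFER LV_LINE TO FILE_NAME.
ENDLOOP.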

    Hi,
Correct me if my understanding is wrong: I think you are trying to load data to the initial ODS, and from that ODS the data goes to two targets (a cube and an ODS) through the PSA.
I think you are working on the 3.x version right now. Make sure your process chain contains the following steps:
Start process
Load to the initial ODS
Activation of the initial ODS
Further update through the PSA (which will update both the ODS and the cube)
Make sure that you have proper update rules and an init for both targets from the lower ODS, and then load the data.
Thanks
