Hierarchical Data Loading: XSD design for Native data

We are working with native data received in the form of a flat file (a few sample lines are attached below):
FWDREF VXA04_X001_GC
FWDREF VXA04_X010_GC
FWDREF VXA04_X050_GC
FWDREF VXA04_X051_GC
FWDREF VXA04_X075_GC
FWDREF VXA04_X100_GC
FWDREF VXA04_X101_GC
FWDREF VXA04_X150_GC
SECTIDAVXBOSY SHELL AND PANELS
SECTIDAGBBOSY SHELL AND PANELS
SECTIDABKКОРПУС НА КАРОСЕРИЯА И ПАНЕЛИ
SECTIDACZDKELET KAROSERIE A PANELY
ILLREFBA1 A05_A1_B
ILLREFBA1-1 A05_A1-1_B
ILLREFBA1-2 A05_A1-2_B
FWDREF VXB04_X101_GC
FWDREF VXB04_X150_GC
SECTIDBVXBOSY SHELL AND PANELS
SECTIDBGBBOSY SHELL AND PANELS
SECTIDACZDKELET KAROSERIE A PANELY
ILLREFBA1 B05_A1_B
ILLREFBA1-1 B05_A1-1_B
This data is hierarchical.
-FWDREF
--SECTID
---ILLREF
The challenge is that the number of occurrences of the parent and child records is not fixed, and they might not occur at all.
For example, there might be a set of rows like this (in the example below, there is no SECTID):
FWDREF VXB04_X150_GC
ILLREFBA1 B05_A1_B
How can the schema be designed in this case?
Thanks in advance

@rp0428
Thanks for taking the time to reply to this. If we talk in terms of a tree structure, in the normal scenario we would have the hierarchy as described before.
-FWDREF
--SECTID
---ILLREF
If we don't talk in terms of XML and XSD and just talk in terms of databases and keys, FWDREF would be the parent, SECTID the child, and ILLREF the grandchild. Now, in case SECTID does not appear, we would still want a key to be generated for it, with a default value, so that the parent, child, and grandchild relationship is maintained.
The whole purpose of this XSD design is to use it in ODI, where this feed file will be automatically loaded into tables from the generated XML, with the parent, child, and grandchild relationships maintained.
Also, I have only taken a sample data set; in the actual case, the hierarchy goes up to a maximum of 20 levels.
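For illustration, here is a rough sketch of the kind of nested, optional structure we have in mind. Only the record names FWDREF, SECTID, and ILLREF come from the sample above; the wrapper element FEED, the VALUE fields, and the decision to let every level repeat or be absent are illustrative assumptions, not a final design.

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <!-- FEED is a made-up wrapper element for the whole file -->
  <xs:element name="FEED">
    <xs:complexType>
      <xs:sequence>
        <!-- FWDREF: zero or more parent records -->
        <xs:element name="FWDREF" minOccurs="0" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="VALUE" type="xs:string"/>
              <!-- SECTID: optional children; when a group has no SECTID row,
                   a default record would be generated during file parsing so
                   that the FWDREF-SECTID-ILLREF keys are still produced -->
              <xs:element name="SECTID" minOccurs="0" maxOccurs="unbounded">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element name="VALUE" type="xs:string"/>
                    <!-- ILLREF: optional grandchildren -->
                    <xs:element name="ILLREF" type="xs:string"
                                minOccurs="0" maxOccurs="unbounded"/>
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

With every level declared minOccurs="0", a group where SECTID is missing entirely still validates, and the default-key generation can then happen in the ODI mapping.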
Thanks to everyone and anyone who takes the time for this!

Similar Messages

  • ERR:10003 Unexpected data store file exists for new data store

    Our TimesTen application crashes and then it cannot connect to the TimesTen datastore; when we use ttIsql we get the error "10003 Unexpected data store file exists for new data store", so we must rebuild the datastore.
    I guess the application damages the datastore because we use "direct-linked" mode. Is that true?
    Should I use "Client-Server" mode if our data is very important?
    thx!

    Your question raises several important discussion points:
    It is possible (though very unlikely in practice) for a C or C++ program operating in direct mode to damage the contents of the datastore, e.g. by writing through an invalid memory pointer. In the 11+ years that TimesTen has existed as a commercial product, we have so far never seen a support case where this was diagnosed as the cause of a problem. However, it is definitely a theoretical possibility, and rigorous program testing and the use of tools such as Purify are strongly recommended when developing in C or C++ in direct mode. Java programs running in direct mode are completely 'safe' unless they invoke non-Java code via JNI, in which case a similar risk is present.
    The reality is that most customers who use TimesTen in very high performance mission critical applications use mainly direct mode...
    Note also that an application crashing should not cause any damage or corruption to a datastore, even if it is using direct mode, as TimesTen contains explicit mechanisms to guard against this.
    Your specific problem (error 10003) has nothing to do with the datastore being damaged. This error reflects a discrepancy between the instance main daemon's metadata about all the datastores that it is managing and reality. It occurs when the main daemon does not know about a datastore, and yet when it comes to connect to (and hence create) the datastore, it finds that checkpoint or log files already exist. The main daemon's metadata is managed solely by the main daemon and is completely separate from the datastore and datastore files (the default location is <tt_instance_install_directory>/info, though you can change this at install time). The usual cause of this is that someone has been manually manipulating files within that directory (which of course you should never do) and has removed or renamed the .DBI file corresponding to the datastore.
    This error should never arise under normal circumstances and certainly not just because some application has crashed.
    Rather than simply switching to the (much slower) client/server mode I think we should try and understand why this error is occurring. Could you please post the following:
    1. Output of ttVersion command
    and then we can take it from there.
    Thanks, Chris

  • Best design for historical data

    I'm searching a way to design some historical data.
    Here my case:
    I have millions of rows defining a population, spread across 20-30 different tables.
    Those rows are related to one master subject area; let's say it is a person.
    The detail tables define a status over a period of time (marital status, address, gender (yes, that may change in some cases!), program, phase, status, support, healthcare, etc.). They all have two attributes that define the time period (dateFrom, dateTo), and one or many attributes that define the status of the person (the measure).
    I know we need a weekly situation for this population from 1998 to the present.
    Problems are those:
    1) the population we will analyze involves some 20 different criteria (measures); we may divide those across several different datamarts to avoid too much complexity
    2) we will drill down from year to week, and will perform comparisons like year to year, year to previous year, year to date, etc. at each level of the time hierarchy (year, quarter, month, week).
    The question is :
    1) do we need to transform our data to a week level for each person to determine the status at that level (the data may be updated, which will cause this transformation to be refreshed)?
    2) do we need to aggregate at each level because mixed situations exist, due to the fact that more than one status may exist for the same person in a given time period (more than one status may exist in March, for example; how should that be interpreted?), which will require some logic to be applied, won't it?
    We will be glad to hear some recommendation/idea about this !
    Thank You

    I would try to get some exact user requirements. With the dataset you have described, there are millions of combinations of answers that can be defined, and it will be difficult for you to fulfill them all in one place, especially if there are slowly changing dimensions involved.
    The user requirements will determine what levels you transform the data at.

  • BPC NW 7.0: Data Load: rejected entries for ENTITY member

    Hi,
    when trying to load data from a BW InfoProvider into BPC (using UJD_TEST_PACKAGE and process chain scheduling), a number of records are being rejected due to missing member entries for the ENTITY dimension in the application.
    However, the ENTITY members actually do exist in the application. Also, the dimension is processed with no errors. The dimension members are also visible using the Excel Client navigation pane for selecting members.
    The error also appears when kicking off the data load from the Excel Client for BPC. Any ideas how to analyze this further or resolve it?
    Thanks,
    Claudia Elsner

    Jeffrey,
    this question is closely related to the issue, because there is also a short dump when trying to load the data into BPC. I am not sure whether both problems are directly related though:
    Short dump with UJD_TEST_PACKAGE
    Problem description from the post:
    When running UJD_TEST_PACKAGE, I get a short dump.
    TSV_TNEW_PAGE_ALLOC_FAILED
    No more storage space available for extending an internal table.
    Other keywords are CL_SHM_AREA and ATTACHUPDATE70.
    When I looked at Notes, I found "Note 928044 - BI lock server". Looking at the note and debugging UJD_TEST_PACKAGE leaves me with some questions:
    1. Do I need a BI lock server?
    2. Should the enque/table_size setting on the central instance be increased from 10000 to 25000 or larger?
    Claudia

  • Data Not coming Properly for 0HR_PA_OS_1 Data Source?

    Hi Experts,
    I am trying to load data to 0PAOS_C01 using the 0HR_PA_OS_1 data source. For the key figure field 0POS_OCCVAC I am not getting any data from the data source.
    I checked RSA3 for the 0HR_PA_OS_1 data source, and it is not populating data for the OCC_VAC_PERC field. I checked with the HR functional team and they said it's a BI issue.
    Kindly tell me how I can populate values for this particular field. FYI, I am getting correct data for the other fields.
    Please help me.
    Thanks

    The possible reasons
    1. Your user ID might not have the proper authorization to pull the HR data. You can confirm this with your Basis administrator. If you are unable to find out which authorization roles are required for HR data, for the time being you can take SAP_ALL authorization in the development box and try to pull.
                            OR
    2. Try to pull from the BW side; you may be able to see the data there, because this uses the background user ID from the RFC connection (provided your background user ID has the proper authorization).

  • Data load from flat file through data synchronization

    Hi,
    Can anyone please help me out with a problem? I am doing a data load in my Planning application through a flat file, using data synchronization for it. How can I specify, in my data synchronization mapping, that values from the load file at the same intersection should be added together instead of overwritten?
    For example, the load file has the following data:
    Entity Period Year Version Scenario Account Value
    HO_ATR Jan FY09 Current CurrentScenario PAT 1000
    HO_ATR Jan FY09 Current CurrentScenario PAT 2000
    the value at the intersection HO_ATR->Jan->FY09->Current->CurrentScenario->PAT should be 3000.
    Is this possible? I don't want to give users rights to the Admin Console for loading data.

    Hi Manmit,
    First let us know whether you are on BW 3.5 or 7.0.
    In either case, just try including the fields X, Y, Date, Qty, etc. in the DataSource with their respective length specifications.
    While loading the data using an InfoPackage, just set the file format to 'Fixed length' in your InfoPackage.
    This will populate the values to the respective fields.
    Prathish

  • BPC - Consolidation - Data Loading - Will BS/PL accounts data ONLY be loaded from ECC?

    Dear All,
    In BPC, when we load data from ECC for Consolidation, my understanding is that we load only BS and PL accounts' data for/by the entity.
    Apart from BS and PL data, will there be any data that will have to be loaded into BPC for consolidation?
    The following three financial statements -
    -Statement of Cash Flow
    -Statement of Changes in Equity
    -Statement of Comprehensive Income
    are actually derived/calculated from the loaded BS and PL data. This is my understanding; please correct me if I am wrong.
    Thank you!
    Regards,
    Peri

    Hi Peri,
    Balance sheet, P&L, and those three financial statements are derived from BS/PL accounts; however, there should also be "flow" information, otherwise you won't end up with a correct consolidated cash flow or equity movement (or you can choose to enter flow detail manually).
    Second, while getting BS & PL accounts you will also need trading partner detail, otherwise you won't be able to do the eliminations (or you can choose to enter trading partner detail manually for intercompany accounts).
    Third, you should also consider other disclosures (depending on what standard you are implementing: IFRS, US GAAP, local GAAP, whatever...).
    Hope this gives an idea.
    Mehmet.

  • Data Load Error due to Master data deletion

    Hi,
    While doing the transactional data load I am getting the following error:
    Master data/text of characteristic ZFOCUSGRP already deleted (Message no. RSDMD138)
    ZFOCUSGRP is an InfoObject (with text). Last week we changed the source system from CRM to R/3; during that time we deleted all the texts for ZFOCUSGRP manually from the table.
    This error does not always happen; sometimes the load works properly. I ran RSRV for the InfoObject ZFOCUSGRP and the InfoCube, but the error still occurs.
    Is there any way to fix this error?
    Thanks in advance.
    Thanks
    Vinod

    check this:
    Re: Error while running InfoPackage
    Master data/text of characteristic 0MATERIAL already deleted
    Master data/text of characteristic ZXVY already deleted
    Hope it helps..

  • Native data warehouse products  vs  non-native data warehouse products

    Hi Experts!
    Can anyone help me on this topic if you have any ideas about it?
    1) Discuss native data warehouse products.
    2) Discuss the system's ability to interface with a non-native data warehouse.
    3) Discuss the architecture in both cases.
    4) Describe or illustrate how data in the data warehouse can be utilized for reporting together with the data in the ERP system.
    Your help will be appreciated.
    Thanks in advance,
    vikram.c


  • Data Model best Practices for Large Data Models

    We are currently rolling out Hyperion IR 11.1.x and are trying to establish best practices for BQYs and how to present these data models to our end users.
    So far, we have created an OCE file that limits the selectable tables to only those that are within the model.
    Then, we created a BQY that brings in the tables to a data model, created metatopics for the main tables and integrated the descriptions via lookups in the meta topics.
    This seems to be OK; however, any time I try to add items to a query, as soon as I add columns from different tables, the app freezes up, hogs a bunch of memory, and then closes itself.
    Obviously, this isn't acceptable to give to our end users, so I'm asking for suggestions.
    Are there settings I can change to get around this memory-hogging issue? Do I need to use a smaller model?
    And in general, how are you all deploying this tool to your users? Our users are accustomed to a pre-built data model so they can just click, add the fields they want, and hit submit. How do I get close to that ideal with this tool?
    thanks for any help/advice.

    I answered my own question. In the case of the large data model, the tool by default was attempting to calculate every possible join path to get from Table A to Table B (even though there is a direct join between them).
    In the data model options, I changed the join setting to use the join path with the least number of topics. This skipped the extraneous steps and allowed me to proceed as normal.
    Hope this helps anyone else who may bump into this issue.

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare side-by-side actual measures with their corresponding objectives/targets. Sounds simple. But, our objectives are static (not able to be aggregated) with multi-dimensionality and multi-levels. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions that are used in the criteria and regardless of the level.
    Here is some more details:
    Example of existing objective table.
    Dimension1   Dimension2   Dimension3   Obj1   Obj2   Quarter
    NULL         NULL         NULL         .99    1.8    1Q13
    DIM1VAL1     NULL         NULL         .99    2.4    1Q13
    DIM1VAL1     DIM2VAL1     NULL         .98    2.41   1Q13
    DIM1VAL1     DIM2VAL1     DIM3VAL1     .97    2.3    1Q13
    DIM1VAL1     NULL         DIM3VAL1     .96    1.9    1Q13
    NULL         DIM2VAL1     NULL         .97    2.2    1Q13
    NULL         DIM2VAL1     DIM3VAL1     .95    2.0    1Q13
    NULL         NULL         DIM3VAL1     .94    3.1    1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure, they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. Using our existing structure, if we were to add a new dimension to the mix, the possible combinations would grow dramatically. (Not flexible)
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.

    Hi, yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a DB. Or, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering the DB, I mean by changing config files, so that the data doesn't go to that Reports schema and instead goes to a custom schema created by a user? I don't know if it can be done. My problem is that when I configure the JMS topic for sensor actions, I see blank data coming; for some reason or other, the data is not getting posted. I have used an ESB with a routing service based on the schema which I am monitoring. Can anyone help?

  • Data load in BI7 for a single InfoObject

    Hi
      I added a new InfoObject (PO number) to the ODS and mapped it to the corresponding R/3 field (BSTNK). Now I want to load data only into the newly added field (PO number) in BW from R/3, without deleting the existing data. I came to know this is possible in BI7; can someone help me with the steps, please?
    Thanks a lot
    Sheetal

    It is possible in earlier versions as well, provided that no key figure is mapped to the Addition update method.
    If any key figure has the update method "Addition", we have these two options, even for BI 7.0:
    1. Drop the data and reload the entire data set.
    2. Create a custom extractor to load only this field.
    If no key figure is set to Addition, just reload the entire data set with the existing DataSource.
    As for the remodeling workbench in BI 7.0: we can't work with DSOs; it's not supported in the current release we have. It is planned for a future release; we don't know yet whether we can achieve this or not!
    Nagesh Ganisetti.
    Assign points if it helps.

  • Second BW data load job waits for first to complete.

    Hi all,
    I am manually loading data into my BW system. I have observed that if I try to load two DataSources together, the system waits for one to complete and only then triggers the second load, e.g. if I run 2LIS_11_VAHDR and 2LIS_13_VDHDR together.
    RSMO shows data records for 11_VAHDR, but for 13_VDHDR it stays at zero until 11_VAHDR finishes.
    I have 8 background processes available in both BW and R/3.
    Please help.
    Regards,

    Hi,
    The case may not be that one load can only start after the other finishes.
    When you start those two loads, observe the background processes in R/3 that start with the job name BI_REQ*.
    My guess is that, even with your 8 background processes, when you start two loads at a time there will be two jobs triggered in R/3 with the job name given above.
    One job may go into RELEASED state in R/3 and wait for the first job to finish; after that one finishes, the waiting job comes into ACTIVE state and starts generating IDocs.
    From this we can say that this may be due to resource availability in R/3.
    Closely monitor the jobs in R/3.
    rgds,
    nkr.

  • InfoCube Design for Variable data - Use of Line Item Dimensions

    I have an InfoProvider based on billing conditions, for which we have extended the extractor structure 2LIS_13_VDKON, and we now have a requirement to add customer fields such as Customer Purchase Order Number and Contract Number. These fields are obviously highly variable. I have added them to the reporting DSO and now need advice on the best way to add these types of fields as reportable dimensions to the InfoCube so as not to impact performance. I currently have 9 dimensions with multiple characteristics and a time dimension. Should I just create a line item dimension for Purchase Order? The problem is that I have 8 other line item dimensions to add, which are customer-specific reporting fields that we capture on the sales order and wish to report on. I know there is a limit of 16 dimensions, and I am also concerned about performance.
    Any advice is greatly appreciated
    Lee Lewis

    Hi,
    To make sure that the InfoCube you have created does not have any performance issues, please do the following:
    Go to RSRV -> All Elementary Tests -> Database -> Database information about InfoProvider tables.
    Upon clicking on "Database information about InfoProvider tables", enter your InfoCube name in the parameter on the right-hand side and execute, then check the log (the log will pop up automatically once execution is done).
    There you will see the database information about the InfoProvider.
    This log shows how well you have designed your InfoCube; make sure that the dimension table corresponding to each dimension does not exceed 20% of the InfoCube (fact table) size.
    Create the dimensions of the InfoCube in such a way (whether line item or normal dimensions) that no dimension table exceeds 20% of the InfoCube size.
    This gives you the size of the data in each particular dimension; if any particular dimension exceeds 20% of the InfoCube size, you need to create line item dimensions for the characteristics in that dimension.
    After recreating the dimensions, test again and see whether any dimension table exceeds 20% of the InfoCube size.
    Repeat this process until all dimension tables are less than 20% of the InfoCube table size.
    This will head off performance issues in reporting.

  • XSD validation for incoming data into BPEL process

    Please suggest how to validate incoming data against an XSD in a BPEL process.
    I just want to verify the data before it enters BPEL.

    Hi,
    I guess I am replying very late.
    In BPEL 2.0 we have an activity called "Validate" which can do the XSD validations.
    "Lets Learn Oracle SOA: Validate XML schema In BPEL"
    Regards,
    Chinmaya
