Data Load Overwrite

Hi,
How does Essbase treat data during an incremental load?
E.g. I am loading January data for some products. Now I must do it again, but some data for the products are #Missing. Will Essbase overwrite the old data with missing, or will it disregard them because they are not present in the load?
Thank you
lubos

Well,
I know all of this. I just want to know: if I select Overwrite, how does my data at some intersection change if those data are no longer present?
E.g. I have 100 USD for January, 2009, Product A ... etc. Now these data are completely missing in my relational DB. Will Essbase leave the 100 USD, or will it replace it with missing?
Or, put another way: will Essbase execute a DELETE (clear) and then insert for the specified intersections, or an update?
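For reference, the documented Essbase behavior is that a data load touches only the intersections actually present in the data source: with "overwrite existing values", cells supplied by the load are replaced (including cells explicitly sent as #MI), while cells absent from the load are left untouched. So it behaves as an update of the supplied cells, not a clear-and-insert; the 100 USD survives unless an explicit #MI is sent for that intersection or the region is cleared first. A minimal Python sketch of those semantics (the cube dict and member names are hypothetical, for illustration only):

MISSING = None  # stands in for #Missing

# Existing data in the cube, keyed by intersection.
cube = {("Jan", "2009", "ProductA"): 100.0,
        ("Jan", "2009", "ProductB"): 50.0}

# The new extract no longer contains Product A at all,
# but explicitly sends #MI for Product B.
load_records = {("Jan", "2009", "ProductB"): MISSING}

# Overwrite semantics: only intersections present in the load are touched.
for intersection, value in load_records.items():
    cube[intersection] = value

print(cube[("Jan", "2009", "ProductA")])  # 100.0 -- untouched
print(cube[("Jan", "2009", "ProductB")])  # None, i.e. #Missing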

Similar Messages

  • Master data load overwrite / add ?

    Hello BW Experts,
    Where: loading master data using infopackage.
    1) Do we have the option of overwriting / adding to the master data?
    2) What do these options mean?
    3) Where do we make these settings in BW in the InfoPackage?
    Please suggest.
    Thanks,
    BWer

    Hi,
    Master data is overwritten in BW.
    If you are loading MD directly then you don't have the option of adding; it'll be overwritten.
    Adding is usually used in the case of transaction data, where it'll add to the existing values for the same key. In overwrite, it'll overwrite the record completely with the new values.
    You have to make these settings at the object level, not at the InfoPackage level.
    Cheers,
    Kedar

  • Data load into Essbase (append instead of overwrite)

    Hello,
    We are loading data from an Oracle table to a target Essbase cube. How do we make the ODI data load append instead of overwriting the last value?
    Ex: We have a data source with an M:1 mapping, so we incorporated a case statement [Case when Group A, B, C then D]. Is there a setting in ODI that allows data to append (add) instead of overwriting?
    Currently, only the data value of C is loaded into D instead of A+B+C into D.
    Thanks.

    You can put the CASE WHEN in the target mapping and still use a load rule; a load rule does not have anything to do with what you do in the target mappings.
    Cheers
    John
    http://john-goodwin.blogspot.com/
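    The loss described above is easy to reproduce: when several source rows map to the same target intersection and the load overwrites, the last row wins. Here is a minimal Python sketch (group and member names are hypothetical) of pre-aggregating the mapped rows so A+B+C arrives as one value, which is also the effect of setting the load rule to add to existing values instead of overwriting:

    from collections import defaultdict

    # Hypothetical M:1 mapping: groups A, B and C all map to target member D.
    mapping = {"A": "D", "B": "D", "C": "D"}
    source_rows = [("A", 10.0), ("B", 20.0), ("C", 5.0)]

    # Overwrite semantics: the last mapped row wins, so D ends up as 5.0.
    overwritten = {}
    for group, value in source_rows:
        overwritten[mapping[group]] = value

    # Pre-aggregating before the load sends A+B+C = 35.0 to D instead.
    aggregated = defaultdict(float)
    for group, value in source_rows:
        aggregated[mapping[group]] += value

    print(overwritten["D"])  # 5.0  -- the symptom described in the question
    print(aggregated["D"])   # 35.0 -- the intended result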

  • Data load to DSO

    Hi guys...
    Suppose I have two DataSources that are mapped to an InfoSource, and this InfoSource is mapped to one DSO (all objects up to the DSO are emulated from 3.x to 7.x). When I load data, I assume that I have to use two InfoPackages and I get data into the DSO in two requests. I have a few questions about this, assuming I have only these two requests in my DSO:
    1. When I tried to create a query directly on the DSO in Query Designer, I could not find the InfoObject 0REQUESTID. How can I see data request by request rather than all together?
    2. Suppose the DSO gets data like below:
    Fields in DSO: X1, X2, Y1, Y2, Y3    [X1, X2 are characteristics and also keys; Y1, Y2, Y3 are key figures]
    Data fed by DataSource 1:
    X1  X2  Y1
    a   b   10
    Data fed by DataSource 2:
    X1  X2  Y2  Y3
    a   b   20  30
    So when I load data, I will load it in two requests, and these are the only two requests I have in my DSO. How will the data look in the DSO? Does it get stored in two separate rows or a single row? How is it shown in a query result?
    If the keys are not matched, how will the data be shown for key figures that are not loaded by that request?
    3. I know that in a DSO we have two options, Overwrite/Addition. How will the data load behave in the following situation?
    DataSource 1 feeds like this in Request 1:
    X1  X2  Y1
    a   b   10
    DataSource 2 feeds like this in Request 2:
    X1  X2  Y1  Y2  Y3
    a   b   30  40  50
    How will the result be shown for our two options, Addition and Overwrite? Will Request 2 overwrite or add up the data in Y1?
    Thanks.

    Hi guys...
    Suppose I have two DataSources that are mapped to an InfoSource, and this InfoSource is mapped to one DSO (all objects up to the DSO are emulated from 3.x to 7.x). When I load data, I assume that I have to use two InfoPackages and I get data into the DSO in two requests. I have a few questions about this, assuming I have only these two requests in my DSO:
    1. When I tried to create a query directly on the DSO in Query Designer, I could not find the InfoObject 0REQUESTID. How can I see data request by request rather than all together?
    The request ID is only part of the new-data table; after activation of your data, the request will be lost. If you want to see what's happening, load your data request by request and activate the data after each request.
    2. Suppose the DSO gets data like below:
    Fields in DSO: X1, X2, Y1, Y2, Y3 [X1, X2 are characteristics and also keys; Y1, Y2, Y3 are key figures]
    Data fed by DataSource 1:
    X1  X2  Y1
    a   b   10
    Data fed by DataSource 2:
    X1  X2  Y2  Y3
    a   b   20  30
    So when I load data, I will load it in two requests, and these are the only two requests I have in my DSO. How will the data look in the DSO? Does it get stored in two separate rows or a single row? How is it shown in a query result?
    If the keys are equal, you will have only one dataset in your DSO.
    If the keys are not matched, how will the data be shown for key figures that are not loaded by that request?
    Then you will have two datasets in your DSO.
    3. I know that in a DSO we have two options, Overwrite/Addition. How will the data load behave in the following situation?
    DataSource 1 feeds like this in Request 1:
    X1  X2  Y1
    a   b   10
    DataSource 2 feeds like this in Request 2:
    X1  X2  Y1  Y2  Y3
    a   b   30  40  50
    How will the result be shown for our two options, Addition and Overwrite? Will Request 2 overwrite or add up the data in Y1?
    If you choose overwrite, you will get 30; if you choose addition, you will get 40.
    Thanks.
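    To make the overwrite/addition behavior above concrete, here is a minimal Python sketch of a DSO-style update by key, using the field names from the question (an illustration of the semantics, not SAP code):

    # X1, X2 form the key; Y1, Y2, Y3 are key figures.
    dso = {}

    def load(request, mode):
        """Apply one request's key figures in "overwrite" or "addition" mode."""
        for key, figures in request.items():
            row = dso.setdefault(key, {})
            for kf, value in figures.items():
                row[kf] = row.get(kf, 0) + value if mode == "addition" else value

    load({("a", "b"): {"Y1": 10}}, mode="overwrite")                       # Request 1
    load({("a", "b"): {"Y1": 30, "Y2": 40, "Y3": 50}}, mode="overwrite")   # Request 2
    print(dso[("a", "b")])  # Y1 is 30; rerun both loads with mode="addition" and Y1 is 40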

  • Data load for target after target enhancement

    Dear all,
    We are using BI 7.00, and in one of our designs the data is first loaded to the ODS and then to the cube. Now we want to get the value of one more field. This field is already available in the DataSource, and data is flowing up to the PSA. I have added that field to the ODS. My problem starts here. I want the data to flow for all the previous requests, i.e., requests already in the ODS and the cube (prior to the enhancement), without disturbing the data load.
    Kindly provide step-by-step instructions. I do not want the earlier data to be deleted. The current field value should be updated into the previous loads of the ODS and the cube.
    Regards,
    M.M

    Hi,
    The overwrite option is available in the update mode of key figures.
    It's the key figure mapping that determines whether the DSO is in overwrite or addition mode.
    Go to the mappings of each InfoObject in the transformation from the DataSource to the DSO, check the mapping type of each key figure, and see whether it is overwrite or not.
    To avoid reloading the cube, one option is to add the fields to both the cube and the DSO, and then activate the transformations, mappings, DTPs etc.
    After that, load the data to the DSO and then schedule the delta from the DSO to the cube. This will bring all the changes in the DSO to the cube.
    But this delta may be a huge one and may fail, depending on the data volume in the DSO.
    You can try this, and if it does not work you may have to delete the cube and reload it.
    Thanks
    Ajeet

  • Problem with Master Data Load

    Dear Experts,
    If somebody can help me with the following case, please give me a solution. I'm working on a BI 7.0 project where I needed to delete master data for an InfoObject material. The way I did this was through tcode "S14". After that, I tried to load the master data again, but the process broke and only half of the data was loaded.
    This is the error:
    Second attempt to write record 'YY99993' to /BIC/PYYYY00006 failed
    Message no. RSDMD218
    Diagnosis
    During the master data update, the master data tables are read to determine which records of the data package that was passed have to be inserted, updated, or modified. Some records are inserted in the master data table by a concurrently running request between reading the tables at the start of package processing and the actual record insertion at the end of package processing.
    The master data update tries to overwrite the records inserted by the concurrently running process, but the database record modification returns an unexpected error.
    Procedure
    •     Check if the values of the master data record with the key specified in this message are updated correctly.
    •     Run the RSRV master data test "Time Overlaps of Load Requests" and enter the current request to analyze which requests are running concurrently and may have affected the master data update process.
    •     Re-schedule the master data load process to avoid such situations in future.
    •     Read SAP note 668466 to get more information about master data update scheduling.
    On the other hand, the SID table for the master data product is empty.
    Thanks for your help!
    Luis

    Dear Daya,
    Thanks for your help; I applied your suggestion. I sent the following details to OSS:
    We are on BI 7.0 (system ID DXX).
    While loading master data for InfoObject XXXX00001 (the main characteristic in our system, like material) we are facing the following error:
    Yellow warning "Second attempt to write record '2.347.263' to /BIC/XXXX00001 was successful"
    We are loading the master data from DataSource ZD_BW_XXXXXXX (from the APO system) through the DTP ZD_BW_XXXXX / XXX130 -> XXXX00001.
    The Master Data tables (S, P, X) are not updated properly.
    The following repairing actions have been taken so far:
    1. Delete all related transactional and master data, checking all relations (tcode SLG1 -> RSDMD, MD_DEL)
    2. Follow instructions from OSS note 632931 (tcode RSRV)
    3. Run report RSDMD_CHECKPRG_ALL from tcode SE38 (using both check and repair options).
    After deleting all data, the previous tests were OK, but once we load new master data, the same problem appears again, and the report RSDMD_CHECKPRG_ALL gives the following error:
    "Characteristic XXXX00001: errors found during this test."
    The RSRV check for "Compare sizes of P and X and/or Q and Y tables for characteristic XXXX00001" is shown below:
    Characteristic XXXX00001: Tables /BIC/PXXXX00001, /BIC/XXXXX00001 are not consistent: 351.196 deviations.
    It seems that our problem is described in OSS note 1143433 (SP13), even though we are already on SP16.
    Could somebody please help us, and let us know how to solve the problem?
    Thank for all,
    Luis

  • Data Load behaviour in Essbase

    Hello all-
    I am loading data from a flat file using a server rule file. In the rule file I have properties for a field that replace a name in the flat file with a member name in the outline, somewhat like this:
    Replace -> With
    Canada -> 00-200-SE
    Belgium -> 00-300-SE
    and so on.
    Now in my flat file there was a new member, for example China, and the replacement for it was not present in the rule file. When the data was loaded into the system, it didn't reject that record; on the contrary, it loaded the values for China into the region above it and overwrote the values for the original one.
    Is this the normal behavior of Essbase? I was thinking that record should have been rejected.
    I know that when we do a Lock & Send via the Add-In, if a member is not present in the outline it gives you a warning when you lock that sheet, and eventually, if you don't delete that member from the template, it will load data against the member above it.
    Is there a workaround for this problem, or is this just how it is?
    I am on Hyperion Planning / Essbase version 9.3.1.
    Thanks

    Still thinking about how these properties affect the way data is being loaded right now. I have gone through the DBAG and I don't see a reason why any of these properties might be affecting the load.
    ^^^
    Here's what I think is happening: China is not getting mapped, but the replacement for Belgium is occurring and resolves to a valid member name. Essbase sees China and doesn't recognize it (you knew all of this already).
    When the load occurs, Essbase says (okay, I am anthropomorphizing, but you get the idea) "Eh, I have no idea what China is, but 00-300-SE is the last good Country member I have, I will load there." Essbase is picking the last valid member and loading to that. I liken it to a lock and send from Excel with nested dimensions and non-repeating members. Essbase "looks up" a row, finds the valid member, and loads there.
    And yes, this is in the DBAG: http://download.oracle.com/docs/cd/E12825_01/epm.111/esb_dbag/ddlload.htm#ddlload1034271
    Search for "Unknown Member Fields" -- it's all the way at the bottom of the above link.
    In fact, to save you the trip, per the DBAG:
    If you are performing a data load and Essbase encounters an unknown member name, Essbase rejects the entire record. If there is a prior record with a member name for the missing member field, Essbase continues to the next record. If there is no prior record, the data load stops.
    Regards,
    Cameron Lackpour
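    Here is a minimal Python sketch of the fallback Cameron describes, using the replace list and member names from the thread (a simplified imitation of the observed behavior, not Essbase internals): an unmapped field falls back to the last valid member seen, so China's value lands on, and overwrites, 00-300-SE.

    # Hypothetical replace list from the rule file; China has no entry.
    replace_with = {"Canada": "00-200-SE", "Belgium": "00-300-SE"}
    known_members = set(replace_with.values())

    rows = [("Canada", 100.0), ("Belgium", 200.0), ("China", 50.0)]

    cube = {}
    last_valid = None
    for country, value in rows:
        member = replace_with.get(country, country)  # unmapped names pass through
        if member in known_members:
            last_valid = member
        elif last_valid is None:
            raise ValueError("no prior valid member -- the load would stop here")
        cube[last_valid] = value  # China's 50.0 overwrites Belgium's 00-300-SE

    print(cube)  # {'00-200-SE': 100.0, '00-300-SE': 50.0}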

  • Data loader : Import -- creating duplicate records ?

    Hi all,
    has anyone else encountered the behaviour with Oracle Data Loader where duplicate records are created (even when I set the option duplicatecheckoption=externalid)? When I check the "import request queue" view, the request parameters of the job look fine:
    Duplicate Checking Method == External Unique ID
    Action Taken if Duplicate Found == Overwrite Existing Records
    but Data Loader has created new records where the "External Unique ID" already exists.
    Very strange is that when I create the import manually (by using the Import Wizard), exactly the same import works correctly! There the duplicate checking method works and the record is updated.
    I know the Data Loader has 2 methods, one for update and the other for import; however, I do not expect the import to create duplicates if the record already exists, but rather to do nothing!
    Is anyone else experiencing the same? I hope this is not expected behaviour! By the way, the "Update" method works fine.
    thanks in advance, Juergen

    Sorry to hear about your duplicate records, Juergen. Hopefully you performed a small test load first, before the full load, which is a best practice for data import that we recommend in our documentation and courses.
    Sorry also to inform you that this is expected behavior: Data Loader does not check for duplicates when inserting (aka importing). It only checks for duplicates when updating (aka overwriting). This is extensively documented in the Data Loader User Guide, the Data Loader FAQ, and in the Data Import Options Overview document.
    You should review all documentation on Oracle Data Loader On Demand before using it.
    These resources (and a recommended learning path for Data Loader) can all be found on the Data Import Resources page of the Training and Support Center. At the top right of the CRM On Demand application, click Training and Support, and search for "*data import resources*". This should bring you to the page.
    Pete

  • Announcing 3 new Data Loader resources

    There are three new Data Loader resources available to customers and partners.
    •     Command Line Basics for Oracle Data Loader On Demand (for Windows) - This two-page guide (PDF) shows command line functions specific to Data Loader.
    •     Writing a Properties File to Import Accounts - This 6-minute Webinar shows you how to write a properties file to import accounts using the Data Loader client. You'll also learn how to use the properties file to store parameters, and to use the command line to reference the properties file, thereby creating a reusable library of files to import or overwrite numerous record types.
    •     Writing a Batch File to Schedule a Contact Import - This 7-minute Webinar shows you how to write a batch file to schedule a contact import using the Data Loader client. You'll also learn how to reference the properties file.
    You can find these on the Data Import Resources page, on the Training and Support Center.
    •     Click the Learn More tab > Popular Resources > What's New > Data Import Resources
    or
    •     Simply search for "data import resources".
    You can also find the Data Import Resources page on My Oracle Support (ID 1085694.1).

    Unfortunately, I don't believe that approach will work.
    We use a similar mechanism for some loads (using the bulk loader instead of web services) for the objects that have a large quantity of daily records.
    There is a technique (though messy) that works fine. Since Oracle does not allow the "queueing up" of objects of the same type (you have to wait for "account" to finish before you load the next "account" file), you can monitor the .LOG file for the SBL 0363 error, which means you cannot submit another file yet (typically because one is already being processed).
    By monitoring for this error code in the log, you can sleep your process, then try again after a preset amount of time.
    We use this to allow an UPDATE, followed by an INSERT, on the account... and then a similar technique so "dependent" objects have to wait for the prime object to finish processing.
    PS: Normal Windows .BAT scripts aren't sophisticated enough to handle this. I would recommend either Windows PowerShell or C/Korn/Bourne shell scripts on Unix.
    I hope that helps some.
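    Here is a minimal sketch of that poll-and-retry pattern in Python (the submit command, log path, and timings are hypothetical; "SBL 0363" is the error code named above):

    import subprocess
    import time

    LOG_FILE = "dataloader.log"                            # hypothetical log path
    SUBMIT_CMD = ["dataloader.bat", "account.properties"]  # hypothetical command
    RETRY_SECONDS = 300

    def submit_when_free(max_attempts=24):
        """Submit a load; sleep and resubmit while a prior job is still running."""
        for attempt in range(max_attempts):
            subprocess.run(SUBMIT_CMD, check=False)
            time.sleep(10)  # give the client a moment to write its log
            # A real script would check only entries written since this attempt.
            with open(LOG_FILE) as log:
                if "SBL 0363" not in log.read():
                    return  # accepted: no "cannot submit yet" error in the log
            time.sleep(RETRY_SECONDS)  # queue is busy; wait before retrying
        raise RuntimeError("queue never freed up")

    submit_when_free()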

  • TileList data load issue

    I am having an issue where the data that drives a TileList works correctly when the TileList is not loaded on the first page of the application. When it is put on a second page in a ViewStack, the TileList displays correctly when you navigate to it. When the TileList is placed on the first page of the application, I get the correct number of items in the TileList, but the information the item renderer is supposed to display, i.e. a picture, caption and title, does not appear. The strange thing is that a Tree populates correctly in the same situation. Here is the sequence of events:
    // get_tree is the data for the tree and get_groups is the data for the tilelist
    creationComplete="get_tree.send();get_groups.send();"
    <mx:HTTPService showBusyCursor="true" id="get_groups" url="[some xml doc]" resultFormat="e4x"/>
    <mx:XMLListCollection id="myXMlist" source="{get_groups.lastResult.groups}"/>
    <mx:HTTPService showBusyCursor="true" id="get_tree" url="[some xml doc]" resultFormat="e4x"/>
    <mx:XMLListCollection id="myTreeXMlist" source="{get_tree.lastResult.groups}"/>
    And then the data providers of the tilelist and tree are set accordingly. I tried moving the data calls from creationComplete to the initialize event, thinking that it would hit earlier in the process and be done by the time the final completion came about, but that didn't help either. I guess I'm just at a loss as to why the tree works fine no matter where I put it but the TileList does not. It's almost like the tree and the tilelist will sit and wait for the data, but the item renderer in the tilelist will not wait. That would explain why clicking on the tile list still produces the correct sequence of events while the visual component of the tilelist is just not working right. Anyone have any ideas?

    Ok, so if the ASO value is wrong, then it's a data load issue and there's no point messing around with the BSO app. You are loading two transactions to the exact same intersection. Make sure your data load is set to aggregate values and not overwrite.

  • Infopackage:Determine the variables for periodic data loading

    Hi,
    we have an InfoPackage for full update, and in the "Data Selection" tab we have set an OLAP variable as the selection.
    If I do the load using the OLAP variable selection (last 12 months), I need to delete data that could be duplicated. But if, for example, I load monthly, I can't simply delete the request that was loaded just before, because I would delete one month that must not be deleted. I have been looking for a possible solution, but I don't know how to solve this.
    Please, can anybody help me?
    Carolina

    Carolina,
    I think your existing data flow is: Source --> Cube.
    Now change this to: Source --> ODS (load monthly, last 12 months) --> Delta (Cube).
    There is no need to delete any data selectively if you load data in overwrite mode. The ODS will generate a change log for any changes and create new records for new data. Just push the delta from this ODS to the cube monthly.
    Srini

  • Demantra Data Load Issue

    I am new to Demantra. I have installed a standalone Demantra system on our server. In order to load data, I created a new model, defined item and location levels, then clicked on 'Build Model'. The data is loaded into 3 custom tables created by me. After creating the model, I cannot log in to 'Collaborator Workbench'; it gives the message 'There are system errors. Please contact your System Administrator'. Can anyone please tell me what I am doing wrong and how to resolve the issue?
    Thanks


  • Master data load failed due to duplicate records .

    hello friends ,
    need some help .
    I am loading the master data from the source system, and it is throwing an error about 56 duplicate records.
    I repeated the step, but found the same error once again.
    I could not find the duplicate records, as there are more than 24,000 records, and among these 56 are duplicates. The duplicates also look the same.
    When I click on the error records, it shows me the procedure below:
    Maintain the attribute in the PSA screen.
    I could not find the duplicate records; could you please let me know how I can maintain this?
    Regards

    Hi,
    Reload the master data after checking the "ignore duplicate records" checkbox. Since master data has overwrite capability, the duplicate records will be overwritten.
    cheers,
    Swapna.G

  • Query in data Loading.

    Hi All,
    I have a query on data loading.
    Right now the delta is running in BI Production. Suppose some records are missing in the ODS; then I can go for selection conditions in the InfoPackage and select the repair full request.
    Query No 1: Instead of running a full repair request, will the system allow a full load into the ODS?
    Query No 2: Instead of running a full repair request, will the system allow a full load into the cube?
    Query No 3: Can I run the delta load once the full load is done in the cube and ODS?
    Thanks,
    Jelina

    Hi there,
    The full repair is nothing more than a full load, with the difference that it doesn't mess up your current deltas, i.e., you can run a full repair without spoiling your deltas.
    So instead of a full load, why not the full repair, since it is still a full extraction?
    Also keep in mind that with a full repair over the ODS, if the ODS is set to overwrite, you have no problem bringing in data that already exists in the ODS; it will simply overwrite the same data. For the cube it is different: if you bring in a full load, it will add to the data that already exists, so you will have to selectively delete from the cube the data you are bringing in again with the full repair, so that you won't have any duplicates.
    In all these cases the full repair works without any problem. And after the full repairs, the deltas work without any issue.
    Diogo.

  • Aggregating data loaded into different hierarchy levels

    I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE dimensioned by a time dimension (parent-child type).
    I read the help in the DML Reference of the OLAP Worksheet, and it says the following:
    When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
    DEFINE all_but_q4 VALUESET time
    LIMIT all_but_q4 TO ALL
    LIMIT all_but_q4 REMOVE 'Q4'
    Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement.
    RELATION time.r PRECOMPUTE (all_but_q4)
    How do I do this for more than one dimension?
    Below is my case study:
    DEFINE T_TIME DIMENSION TEXT
    T_TIME
    200401
    200402
    200403
    200404
    200405
    200406
    200407
    200408
    200409
    200410
    200411
    2004
    200412
    200501
    200502
    200503
    200504
    200505
    200506
    200507
    200508
    200509
    200510
    200511
    2005
    200512
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    2004 NA
    200412 2004
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    2005     NA
    200512 2005
    DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
    EQ -
    aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
    COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 4,00 ---> here it's right!! but...
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00 ---> here it must be 30,00, not 10,00
    200512 NA
    DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
    T_TIME PRUEBA2_IMPORTE_STORED
    200401 NA
    200402 NA
    200403 NA
    200404 NA
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    2004 NA
    200412 NA
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    2005 10,00
    200512 NA
    DEFINE OBJ262568349 AGGMAP
    AGGMAP
    RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
    args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
    AGGINDEX NO
    CACHE NONE
    END
    DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
    T_TIME_AGGRHIER_VSET1 = (H_TIME)
    DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
    T_TIME_AGGRDIM_VSET1 = (2005)
    Regards,
    Mel.

    Mel,
    There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue is different depending on the needs of the application.
    1. Data is loaded symmetrically at uniform mixed levels. Examples would include loading data at "quarter" in historical years but at "month" in the current year; it does /not/ include data loaded at both quarter and month within the same calendar period.
    = solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
    2. Data is loaded at both a detail level and its ancestor, as in your example case.
    = the aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, each of which is then added as one of the children of the aggregate node. This enables repeatable calculation as well as auditability of the resultant value.
    Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value, and returns it for a result of 10; the aggregate command would recalculate based on January and February for a result of 20.
    To solve your usage case I would suggest a hierarchy that looks more like this:
    DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
    -----------T_TIME_HIERLIST-------------
    T_TIME H_TIME
    200401 2004
    200402 2004
    200403 2004
    200404 2004
    200405 2004
    200406 2004
    200407 2004
    200408 2004
    200409 2004
    200410 2004
    200411 2004
    200412 2004
    2004_SELF 2004
    2004 NA
    200501 2005
    200502 2005
    200503 2005
    200504 2005
    200505 2005
    200506 2005
    200507 2005
    200508 2005
    200509 2005
    200510 2005
    200511 2005
    200512 2005
    2005_SELF 2005
    2005 NA
    Resulting in the following cube:
    T_TIME PRUEBA2_IMPORTE
    200401 NA
    200402 NA
    200403 2,00
    200404 2,00
    200405 NA
    200406 NA
    200407 NA
    200408 NA
    200409 NA
    200410 NA
    200411 NA
    200412 NA
    2004_SELF NA
    2004 4,00
    200501 5,00
    200502 15,00
    200503 NA
    200504 NA
    200505 NA
    200506 NA
    200507 NA
    200508 NA
    200509 NA
    200510 NA
    200511 NA
    200512 NA
    2005_SELF 10,00
    2005 30,00
    3. Data is loaded at a level based upon another dimension; for example, product being loaded at 'UPC' in EMEA but at 'BRAND' in APAC.
    = this can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
    4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
    = this often requires the use of ALLOCATE in order to push the data to the leaves, so that the aggregate values are correctly calculated during aggregation.

    I have a large and very busy batch database. All of a sudden the "Disk file operations I/O" wait event is in the top 5 in AWR. The manual page isn't very helpful: http://download.oracle.com/docs/cd/E11882_01/server.112/e17110/waitevents003.htm#insert