DATA LOAD WARNINGS IN ASO CUBES

Hi everyone,
While loading data into ASO cubes in Essbase we are getting warnings like *"Data load stream contains [1.25797e+08] zero and [0] #missing cells"*. My data file has #missing values, zeros, and special characters like E. I want to load the complete data without warnings. Kindly let me know if anyone knows the solution. Do I need to change any settings in the rules file, or how can I ignore those cells?
Thanks,
Vikram

The warnings are really informational messages to let you know what it loaded and did not load, which is fine, as those values (the zeros) tend to bloat a cube. #Missing is not going to load anyway, and the E is just exponential formatting of numbers, which should not be a problem. Excel will display it this way, but you can format it without the E. You don't mention whether you are doing this from EAS or MaxL, or what version you are on. In version 11, the EAS load dialog has options across the top to turn the loading of zeros and missing values on or off. In MaxL, I don't see the syntax in the Tech Reference, but I thought it was there in 9.
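
If you do want to handle it explicitly in MaxL, one option (just a rough sketch; the app/db name, buffer id, file paths, and error file below are placeholders, and your rules file can still be applied when staging the file) is to push the load through an ASO load buffer whose properties tell Essbase to drop zeros and #Missing cells:

/* sketch only: the buffer properties drop zero and #Missing cells instead of warning about them */
alter database AsoApp.AsoDb initialize load_buffer with buffer_id 1
    property ignore_zero_values, ignore_missing_values;

/* stage the data file in the buffer; rejected records go to the error file */
import database AsoApp.AsoDb data
    from data_file '/data/vikram_load.txt'
    to load_buffer with buffer_id 1
    on error write to '/data/vikram_load.err';

/* commit the buffer contents to the cube */
import database AsoApp.AsoDb data from load_buffer with buffer_id 1;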

Similar Messages

  • Data load in Essbase ASO cube

    Hi,
    I have not used ASO cubes before and have worked only on BSO cubes. Now I have a requirement to create a rules file to load data into an ASO Essbase cube. I created the data load rules file the same way I would for a BSO cube, and it validates correctly. However, when I do the data load I get the following warning:
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped"
    I investigated further and found that an ASO cube does not allow data loading at upper levels or on members calculated through formulas. After this I ensured that I am loading the data only into level-0 members that are not calculated through a formula. But I am still not able to do the data load and keep getting the same warning.
    Could you please help me and let me know if there is anything else which I am missing here?
    Thanks in advance...
    AKW

    Hi AKW,
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped"This is only a warning message that means only those many cells were skipped might be for some reasons like any member pointing to those cells will be missing.
    If you want to copy the Data of your BSO cube to an ASO Application why dont you use an PARTIONING it will copy your whole data from BSO to ASO (If Outline is common in both then copy any member of Sparse dimension like "Scenario 1" from Source i.e. BSO, to same member like "Scenario 1" in Target i.e ASO ),
    This is only an alternate wayThanks
    Avneet Singh Bhatia
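    If partitioning is not an option, another way to get the BSO data across (just a sketch, not from the replies above; the app/db names, file paths, and the 'ld2aso' rules file that maps the export columns to the ASO outline are placeholders) is to export the level-0 data from the BSO cube with MaxL and load the export file into the ASO cube through a load rule:
    /* export only level-0 data from the BSO source */
    export database BsoApp.BsoDb level0 data to data_file '/data/bso_lev0.txt';
    /* load the exported file into the ASO target using a rules file that maps the columns */
    import database AsoApp.AsoDb data
        from data_file '/data/bso_lev0.txt'
        using server rules_file 'ld2aso'
        on error write to '/data/ld2aso.err';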

  • Data Load MAXLs in ASO

    Hi All,
    Greetings of the day !!!!
    I want to understand the difference between "add values create slice" and "override values create slice" used in the data load MaxL statements.
    Suppose we initialized a buffer and loaded data into the buffer; then we can use the following two MaxL statements:
    1)
    import database AsoSamp.Sample data
    from load_buffer with buffer_id 1
    add values create slice;
    2)
    import database AsoSamp.Sample data
    from load_buffer with buffer_id 1
    override values create slice;
    Q1
    What I am thinking logically is: if I load data again into the same intersections from which the slice was created, ADD VALUES will add to them and OVERRIDE VALUES will overwrite them, e.g. if 100 was present earlier and we load 200 again, ADD will make it 300 and OVERRIDE will result in 200. Let me know if my understanding is correct.
    Q2
    Why do we use "create slice"? What is the use? Is it for better data load performance? Is it compulsory to merge the slices after data loading?
    Can't we just use add values or override values if we don't want to create a slice?
    Q3
    I also saw two MaxL statements for merging: one was MERGE ALL DATA and the other was MERGE INCREMENTAL DATA. What's the difference? In which case do we use which?
    Please help me resolve my doubts. Thanks a lot!

    Q1 - Your understanding is correct. The buffer commit specification determines how what is in the buffer is applied to what is already in the cube. Note that there are also buffer initialization specifications for 'sum' and 'use last' that apply only to data loaded to the buffer.
    Q2 - Load performance. Loading data to an ASO cube without 'create slice' takes time (per the DBAG) proportional to the amount of data already in the cube. So loading one value to a 100GB cube may take a very long time. Loading data to an ASO cube with 'create slice' takes time proportional to the amount of data being loaded - much faster in my example. There is no requirement to immediately merge slices, but it will have to be done to design / process aggregations or restructure the cube (in the case of restructure, it happens automatically IIRC). The extra slices are like extra cubes, so when you query Essbase now has to look at both the main cube and the slice. There is a statistic that tells you how much time Essbase spends querying slices vs querying the main cube, but no real guidance on what a 'good' or 'bad' number is! See http://docs.oracle.com/cd/E17236_01/epm.1112/esb_tech_ref/aggstor_runtime_stats.html.
    The other reason you might want to create a slice is that it's possible to overwrite (or even remove, by committing an empty buffer with the 'override incremental data' clause in the buffer commit specification) only the slice data without having to do physical or logical clears. So if you are continually updating current period data, for example, it might make sense to load that data to an incremental slice.
    Q3 - You can merge the incremental slices into the rest of the cube, or you can merge multiple incremental slices into one single incremental slice, but not into the rest of the cube. Honestly, I've only ever wanted to use the first option. I'm not really sure when or why you would want to do the second, although I'm sure it's in there for a reason.
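    Putting the statements above together, a minimal end-to-end sketch of the incremental workflow (the buffer id, resource_usage setting, and file paths are placeholders to adjust for your environment):
    /* 1. create the load buffer - nothing touches the cube yet */
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 1 resource_usage .5;
    /* 2. stage the data file in the buffer */
    import database AsoSamp.Sample data
        from data_file '/data/curr_period.txt'
        to load_buffer with buffer_id 1
        on error write to '/data/curr_period.err';
    /* 3. commit the buffer as an incremental slice, adding to any existing values */
    import database AsoSamp.Sample data
        from load_buffer with buffer_id 1
        add values create slice;
    /* 4. later, fold the incremental slices into the main database (e.g. before building aggregations) */
    alter database AsoSamp.Sample merge incremental data;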

  • Wrong date value in Essbase ASO cube

    Hi All,
    I'm trying to load a date value in mm-dd-yy format into an Essbase ASO cube. I'm using a tab-delimited txt file. The load rule is working fine, and the outline property is set with the proper format "mm-dd-yy". I loaded the data, but when I retrieve it using Smart View I see all the dates decreased by one day in my Smart View report.
    Would you have any ideas why that is happening?
    Thanks

    This is a bug and it is fixed in 11.1.2.

  • Data load from DSO to cube

    Hi gurus,
    We have a typical problem: we have to combine 2 records into one when they reach the DSO.
    One source is a flat file and the other is R/3, so I am getting a few fields from the flat file and a few from R/3. They create one record when they are loaded to the DSO. So I am getting one record in the active data table, but when I load the data from that DSO to the cube (the data goes from the change log to the cube), I get 2 separate records in the cube (one loaded from the flat file and one loaded from R/3), which I don't want. I want only one record in the cube, just like I have in the active data table of the DSO.
    I can't get the data from the active data table because I need a delta load.
    Could you please advise what I can do to get that one record in the cube?
    Please help.

    Ravi,
    I am sending the data through DTP only, but is there any solution to get one record? In another scenario I am getting data from 2 different ERP sources and I get one record in the DSO and in the cube as well.
    But that is not happening for this second scenario, where I am getting data from the flat file and ERP and trying to create one record.

  • Data load from DSO to cube fails

    Hi Gurus,
    The data loads have failed last night and when I try to dig inside the process chains I find out that the cube which should be loaded from DSO is not getting loaded.
    The DSO has been loaded without errors from ECC.
    The error message says: "The unit/currency 'source currency 0CURRENCY' with the value 'space' is assigned to the key figure 'source key figure ZARAMT'."
    I looked in the PSA; it has about 50,000 records,
    and all data packages have a green light and all amounts have 0CURRENCY assigned.
    I went into the DTP and looked at the error stack; it had nothing in it. Then I changed the error handling option from 'no update, no reporting' to 'valid records update, no reporting (request red)' and executed, and the error stack showed 101 records.
    The ZARAMT field has 0CURRENCY blank for all these records.
    I tried to assign USD to them; the changes were saved and I tried to execute again, but then the message says that the request ID should be repaired or deleted before the execution. I tried to repair it, but it says it cannot be repaired, so I deleted it and executed. It fails and the error stack still shows 101 records. When I look at the records, the changes I made do not exist anymore.
    If I delete the request ID before changing and then try to save the changes, they don't get saved.
    What should I do to resolve the issue?
    thanks
    Prasad

    Hi Prasad....
    The error stack is request-specific. Once you delete the request from the target, the data in the error stack will also get deleted.
    Actually, in this case what you are supposed to do is:
    1) Change the error handling option to 'valid records update, no reporting (request red)' (as you have already done) and execute the DTP. All the erroneous records will get accumulated in the error stack.
    2) Then correct the erroneous records in the error stack.
    3) Then in the DTP, in the "Update" tab, you will get the option "Error DTP". If it is not already created, you will get the option "Create Error DTP". Click there and execute the error DTP. The error DTP will fetch the records from the error stack and will create a new request in the target.
    4) Then manually change the status of the original request to green.
    But did you check why the value of this field is blank? If these records come again as part of a delta or full load, your load will fail again. Check in the source system and fix it there for a permanent solution.
    Regards,
    Debjani......

  • Data loading from DSO to Cube

    Hi,
    I have a question,
    In the book TBW10 I read about the data load from DSO to InfoCube:
    " We feed the change log data to the InfoCube, 10, -10, and 30 add to the correct 30 value"
    My question is: the cube already has the value 10, so if we are sending 10, -10, and 30 as values (delta), shouldn't the total be 40 instead of 30?
    Please can someone explain this to me?
    Thanks

    No, it will not be 40.
    It will be 30 only.
    Since the cube already has 10, the before image will nullify it by sending -10, and then the correct value from the after image will be added as 30.
    So it will be 10 - 10 + 30 = 30.
    Thank you.
    Regards,
    Vinod

  • Data load from DSO to Cube in BI7?

    Hi All,
    We just migrated a data flow from 3.5 to 7 in development and moved it to production. So until now in production, the data loads happened using the InfoPackages:
    a. Infopackage1 from datasource to ODS and
    b. Infopackage2 from ODS to the CUBE.
    Now after we transported the migrated dataflow to production, to load the same infoproviders I use
    1. Infopackage to load PSA.
    2. DTP1 to load from PSA to DSO.
    3. DTP2 to load from DSO to CUBE.
    Steps 1 and 2 work fine, but when I run DTP2 it gets terminated. However, when I tried step b (above), it loads the cube fine using the InfoPackage. So I am unable to understand why the DTP failed and why the InfoPackage load is successful. In order to use the DTP, do we need to do any cleanup when using it for the first time? Please let me know if you have any suggestions.
    Please note that the DSO already has data loaded using infopackage.  (Is this causing the problem?)
    Thanks,
    Sirish.

    Hi Naveen,
    Thanks for the reply. Creating a DTP is not possible without a transformation.
    The transformation has been moved to production successfully.

  • Partial data loading from dso to cube

    Dear All,
    I am loading the data from the DSO to the cube through a DTP. The problem is that some records (around 50 crore) are getting added to the cube through data packet 1, while through data packet 2 records are not getting added.
    It is a full load from the DSO to the cube.
    I have tried deleting the request and executing the DTP again, but the same number of records get added to the cube through data packet 1, and after that no records are added through data packet 2; the request remains in yellow state only.
    Please suggest.

    Nidhuk,
    Data loads transfer package by package. It sounds like the load got stuck in the second package or something. I suggest you check the package size and try increasing it to a higher number to see if anything changes. 50 records per package is kind of low; your load should not spread out into too many packages.
    Regards. Jen
    Edited by: Jen Yakimoto on Oct 8, 2010 1:45 AM

  • Error while data loading in real time cube

    Hi experts,
    I have a problem. I am loading data from a flat file. The data is loading correctly up to the DSO, but when I try to load it into the cube it gives an error.
    The cube is a real-time cube for planning. I have changed the status to allow data loading, but the DTP still gives an error.
    It shows the error "error while extracting from DataStore" plus an RSBK 224 error and an RSAR 051 error.

    What was the resolution to this issue? We are having the same issue, only with an external system (not a flat file). We get the RSAR 051 error message with a return code of 238, as if it is not even getting to the RFC connection (DI_SOURCE). We have been facing this issue for a while and have even opened a message with SAP.

  • Data Load from DS --> DSO --> Cube

    Hello Guys,
    I have an issue with the data load.
    I use the OFI_GL_14 extractor.
    When I did the init with data transfer, I had around 450K records.
    When I loaded from the DS to the DSO by DTP, it loaded the same 450K.
    When I tried to load the data from the DSO to the cube, it selects only 14.5K records.
    It says 14.5K records transferred and 14.5K records added.
    Even if it aggregates at the cube level, shouldn't it say 450K records transferred and 14.5K records added?
    I don't have any filters in the DTP; I checked it. I deleted the DTP and created another new one, with the same result.
    From DS to DSO it is a delta DTP, and again from DSO to cube it is a delta DTP.
    I am not sure why it filters and selects only 14.5K records out of 450K records.
    Have you guys encountered this type of situation? Let me know if I am doing something wrong.
    Thanks again for your input. It is really appreciated .
    Senthil

    Are you using an InfoSource in between? If you are, the InfoSource is aggregating the data before it gets to the transformation; in that case you will see that fewer records are transferred. If you mark all the fields as key fields in the InfoSource, you will get the same number of records transferred to the transformation, and the data will be aggregated going into the cube.
    thanks.
    Wond

  • Data Load error ODS to Cube

    Hi
    Data Flow ODS-->Cube (Transformations)
    We replaced update rules with transformations.
    Fiscal Year Variant is mapped to Fiscal Year Variant
    FY Year mapped to FY Year
    FY Year/Period Mapped to FY Year/Period
    In the transformations we derive Cal Month and Cal Year/Month from FY Year/Period using a routine:
    it takes the first 4 characters and the last 2 characters of FY Year/Period to build Cal Year/Month,
    and similarly Cal Month is derived from FY Year/Period using a routine.
    This is done in older Update rules as well...
    When I tried to load data, I got the error below:
    "Fiscal year variant X is not maintained for calendar year 001"
    Please advise how to fix this.
    Thanks

    Nice copy/paste Mtl...
    When I post 'please search the forum', this seems not to help.
    When I post links to other posts, my reply is often removed because the moderators don't want to see a link collage.
    Therefore, if an answer is available in a previous post, I copy/paste it, as for me this is the only solution (however, next time I'll mention the author). I do not copy/paste within the same thread hoping that the last post will receive the points.
    Concerning the fact that I might not have read the question correctly: during all the time I have been using this forum I have seen that the question is not always the exact problem description, therefore I propose solutions even if at first glance they seem off the case. Moreover, this error will only pop up at the moment of checking master data, so if the DSO is write-optimized, or SID generation is turned off, the error will only show when loading to the cube and can be solved by transferring the R/3 data. The fact that it concerns a load from DSO to cube is then of no significance.
    M.

  • How is data loaded into the BCS cubes?

    We are on SEM-BW 4.0 package level 13. I'm totally new to BCS from the BW view point. I'm not the SEM person but I support the BW side.
    Can anyone explain to me, or point me to documentation that explains, how the data gets loaded into cube 0BCS_C11 Consolidation (Company/Cons Profit Center)? I installed the delivered content and I can see various export DataSources that were generated. However, I do not see the traditional update rules, InfoSources, etc.
    The SEM person has test loaded some data to this cube and I can see the request under 'manage' and even display the content. However the status light remains yellow and data is not available for reporting unless I manually set the status to green.
    Also, I see on the manage tab under Info-Package this note: Request loaded using the APO interface without monitor log.
    Any and all assistance is greatly appreciated.
    Thanks
    Denny

    Hi Dennis,
    For reporting, the virtual cube 0BCS_VC11, which is fed by 0BCS_C11, is used.
    You don't need to be concerned about the yellow status. The request is closed automatically after reaching 50,000 records.
    About the data stream: you are right, the BW cube is used.
    And if your BW has some cubes with information for BCS on a monthly basis, you may arrange a load from a data stream.
    I make this BW cube as similar in structure to 0BCS_C11 as possible, for a smooth data load. The cube might be fed by another cube which contains information in another format; in the update rules of the first cube you may transform the data so that the cube structures are compatible.
    Best regards,
    Eugene

  • Optimize the data load process into BPC Cubes on BW

    Hello Gurus,
    We would like to know how to optimize the data load process. Our scenario is that we have the ECC classic ledger, and we are looking for the best way to load data into the BW InfoCubes from an ECC source.
    To complement the question above: from which tables must the data be extracted and then passed to BW so that the consolidation is done? Also, is there any other module that has to be considered for this, such as FI or EC-CS?
    Best Regards,
    Rodrigo

    Hi Rodrigo,
    Have you looked at the BW Business Content extractors available for the classic GL? If not, I suggest you take a look. BW business content provides all the business logic you will normally need to get data out of ECC and into BW for pretty much every ECC application component in existence: [http://help.sap.com/saphelp_nw70/helpdata/en/17/cdfb637ca5436fa07f1fdc0123aaf8/frameset.htm]
    Ethan

  • Loading data to Parent in ASO cube

    I created an Excel data file to test different load scenarios. One scenario was using a parent level in one of the dimensions. I expected the record to show up in the dropped records file but it doesn't. Does anyone know why it doesn't? Is there any way to tell that the record wasn't loaded? We are on v11.

    You get a warning message 1003055: "Aggregate storage applications ignore update to derived cells. [X] cells skipped".
