Data loading doubts

Hi Gurus
Just trying to put the pieces together to understand the system more accurately. Would you please help me clear up the following doubts?
1. What is RSRV testing and where can we use it? How do we do it?
2. From where can we schedule a process chain? Can I have the path to schedule it? And if we are scheduling a process chain, is there still a need to schedule the InfoPackage?
3. What is a timestamp and how does it work?
Thanks in advance
KK

Hi Krishna,
It is not required to run this after every data load. But let's say you want to see the number of records in the cube tables (for performance tuning); then you can use an RSRV test: RSRV > Elementary Tests > Database > Database Information about InfoProvider Tables.
Or sometimes the SIDs of InfoObject values are corrupted, and you can try to repair them with the RSRV tests for master data.
If you open up each test in RSRV, you can click on the Help button (rightmost) and read a description.
Hope this helps...

Similar Messages

  • Master data Load doubt

    Hi All,
    We can load master data with an InfoSource with direct update or an InfoSource with flexible update.
    With an InfoSource with direct update, the communication structure acts as the data target. If we load using direct update, reporting is not possible on the master data.
    With flexible update, reporting is possible from the master data.
    BW is mainly focused on analyzing data and reporting; in that case, why is the direct update method provided?
    What is the benefit of using this InfoSource with direct update method?
    Need more clarification on this.
    Regards,
    Arun.M.D

    Dear,
    Master data can be stored in two places:
    1. in the InfoObject itself
    2. in a data target (ODS)
    We use a direct-update InfoSource when we want to store the master data in the InfoObject itself.
    We use a flexible-update InfoSource when we want to store the master data in a data target such as an ODS.
    Hence, when master data is stored in data targets, it is available for reporting.
    If your data is stored in the InfoObject through a direct-update InfoSource, you can still use that InfoObject as a data target by setting the corresponding option for the InfoObject (on the Master Data tab).
    Thanks

  • Regarding MM Data Loads.

    Hi All,
    I am loading data for the MM cubes, but found that many DataSources like 2LIS_02_CGR, 2LIS_02_SCN, and 2LIS_02_SGR are blank and do not have any data. I suspect I forgot to do something in R/3 to get the data. I have also populated the setup tables.
    Is there any sequence for loading data through these DataSources? For a few DataSources like 2LIS_02_SCL, the number of records transferred is in the lakhs, but 0 records were added.
    Please help me. I will assign points.
    Thanks and Regards,
    Sangini.

    Hi Sangini,
    Refer to following link
    http://help.sap.com/saphelp_nw04/helpdata/en/8d/bc383fe58d5900e10000000a114084/frameset.htm
    It describes the SAP standard sequence for loading data through a process chain.
    Hope it helps.
    Regards
    Mr Kapadia

  • Data loading in Hyperion Web Forms

    Hi,
    I am new to Hyperion Planning technology.
    I have a few doubts regarding metadata and data loading.
    I am aware that we can load data into Essbase cubes and see it from web forms.
    Can we load data directly into web forms?
    Can we load metadata into Essbase? Or should metadata be loaded through HAL to the Hyperion Planning application?
    I understand that we can load data into Essbase, but I am not clear whether we can also load data, and metadata, into Hyperion Planning.
    I request you to clarify this for me as soon as possible.
    Thanks a lot in advance for your help.
    Regards,
    Upendra.

    Hi,
    To answer some of your questions :-
    Can we load data directly into web forms?
    The data in web forms is pulled directly from Essbase; you can load the data into Essbase, enter it through the web forms, or use something like Smart View.
    Can we load metadata into Essbase? Or should metadata be loaded through HAL to the Hyperion Planning application?
    If it is a Planning application, then the metadata has to go into Planning first and is then pushed down to Essbase; you can enter metadata directly into Planning, use ODI or HAL (if you are licensed to use it), or, if you're brave, EPMA.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Incremental Data loading in ASO 7.1

    HI,
    As per the 7.1 Essbase DBAG:
    "Data values are cleared each time the outline is changed structurally. Therefore, incremental data loads are supported
    only for outlines that do not change (for example, logistics analysis applications)."
    That means we can have incremental loading for ASO in 7.1 for an outline that doesn't change structurally. Now, what is meant by an outline that changes structurally? If we add a level 0 member in any dimension, does that count as a structural change to the outline?
    It also says that adding an Accounts/Time member doesn't clear out the data; only adding/deleting/moving a standard dimension member will clear out the data. I'm totally confused here. Can anyone please explain?
    The following actions cause Analytic Services to restructure the outline and clear all data:
    ● Add, delete, or move a standard dimension member
    ● Add, delete, or move a standard dimension
    ● Add, delete, or move an attribute dimension
    ● Add a formula to a level 0 member
    ● Delete a formula from a level 0 member

    Adding a Level 0 member is generally, if not always, considered to be a structural change to the outline. I'm not sure if I've tried to add a member to Accounts and see if the data is retained. This may be true because by definition, the Accounts dimension in an ASO cube is a dynamic (versus Stored) hierarchy. And perhaps since the Time dimension in ASO databases in 7.x is the "compression" dimension, there is some sort of special rule about being able to add to it -- although I can't say that I ever need to edit the Time dimension (I have a separate Years dimension). I have been able to modify formulas on ASO outlines without losing the data -- which seems consistent with your bullet points below. I have also been able to move around and change Attribute dimension members (which I would guess is generally considered a non-structural change), and change aliases without losing all my data.
    In general I just assume that I'm going to lose my ASO data. However, all of my ASO outlines are generated through EIS and I load to a test server first. If you're in doubt about losing the data -- try it in test/dev. And if you don't have test/dev, maybe that should be a priority. :) Hope this helps -- Jason.

  • Data load process for FI module

    Dear all,
    We are using BI 7.00, and in one of our FI DataSources, 0EC_PCA_1, we had a data load failure. The cause of the failure was analysed and we did the following:
    1) deleted the data from the cube and the PSA
    2) reloaded (full load) the data - without disturbing the init
    This solved our problem. Now, when data reconciliation is done, we find that there are doubled entries for some of the G/L codes.
    I have a doubt here.
    Since there is no setup table for FI transactions (correct me if I am wrong), the full load picked up data that was also present in the delta queue, and subsequently the delta load loaded the same data again
    (some G/L records that were available as delta).
    Kindly explain the functioning of FI data loads. Should we go for a downtime, and how do FI data loads work without setup tables?
    Can the experts provide a solution for addressing this problem? Can anyone provide a step-by-step process that has to be adopted to solve this problem permanently?
    Regards,
    M.M

    Hi Magesh,
    The FI DataSources do not involve setup tables when performing full loads, and they do not involve the outbound queue during delta loads.
    A full load happens directly from your DataSource view to BI, and deltas are captured in the delta queue.
    Yes, you are right in saying that when you did the full load, some values were pulled that were also present in the delta queue. Hence you have double loads.
    You need to completely reinitialise, as the load process is disturbed. Whether to take a downtime depends on how frequently transactions are happening.
    You need to:
    1. Completely delete the data in BW, including the initialisation.
    2. Take a downtime if necessary.
    3. Reinitialise the whole DataSource from scratch.
    Regards,
    Pramod

  • Number of parallel process definition during data load from R/3 to BI

    Dear Friends,
    We are using BI 7.00. We have a requirement in which I should increase the number of parallel processes during data load from R/3 to BI. I want to modify this for a particular DataSource and check. Can the experts provide helpful answers to the following questions?
    1) When a load is taking place, or has taken place, where can we see how many parallel processes that particular load used?
    2) Where should I change the setting for the number of parallel processes for the data load (from R/3 to BI), and not within BI?
    3) How does the system work, and what will be the net result of increasing or decreasing the number of parallel processes?
    Expecting the experts' help.
    Regards,
    M.M

    Dear Des Gallagher,
    Thank you very much for the useful information provided. The following was my observation.
    From the posts in this forum, I was given to understand that the setting for a specific DataSource can be made at the InfoPackage and DTP level. I carried this out and found that there is no change in the load, i.e., the system by default takes only one parallel process even though I maintained 6.
    Can you kindly explain the above-mentioned point, i.e.:
    1) Even though the value is maintained at the InfoPackage level, will the system consider it or not? If not, from which transaction does the system derive the single parallel process?
    Actually, we wanted to increase the package size, but we failed because I could not understand what values have to be maintained. Can you explain this in detail?
    Can you clarify my doubt and provide a solution?
    Regards,
    M.M

  • Data load error in 0EC_PCA_1

    Dear all,
    For the DataSource 0EC_PCA_1, during delta load, the load failed in the PSA one day due to a tRFC connection problem. Loading on all other days has subsequently taken place. The load failed on 17.04.2008; subsequent loads have been updated to the cube.
    Now, when data reconciliation is done, there seems to be a mismatch in the data. Can anyone clear up the following doubts?
    1) If a delta load fails on a particular day, will the subsequent successful loads also fetch the failed data or not?
    If the failed data will not be fetched during the subsequent loads, how can that particular delta alone be loaded to the cube?
    Kindly provide the steps and also the principle involved in the delta load mechanism.
    Experts' help expected.
    Regards,
    M.M

    Dear Mr. Simon,
    Thank you at the outset for the prompt reply, and sorry for the late response. You have explained the principle by which the 0EC_PCA_1 DataSource works. In case of data load failures, you suggested doing a repair load into the ODS.
    My problem is that the load failed in the PSA. Is that the cause of the missing data? That is, if a delta load fails on a particular day, will the subsequent day's successful load not capture the leftover delta that failed? This is where I am a bit confused about the general principle of the delta load mechanism.
    Can you kindly suggest methods, with steps, to reload in case the subsequent loads to the PSA do not capture the leftover failed delta?
    Your inputs will be very helpful. Kindly also try to explain the delta mechanism.
    Regards,
    M.M

  • Regarding ERPI Data Loading

    Dear All,
    I have a few doubts about ERP Integrator.
    1) What is required from Oracle GL for data loading to Planning using ERP Integrator? (Is the trial balance enough, or do we require some other file from Oracle GL?)
    2) Are there any scheduling options available for data loading using ERP Integrator?
    3) What is the process for loading data to Planning using ERP Integrator?
    4) How do we load the data to Planning (i.e. monthly load, hourly load)?
    Can anyone please guide me in this situation?
    Thanks,
    PC

    1) What is required from Oracle GL for data loading to Planning using ERP Integrator? (Is the trial balance enough, or do we require some other file from Oracle GL?)
    Assuming you have the right version of Oracle EBS, ERP Integrator queries the tables within the Oracle EBS database to get the appropriate information. In my case, the trial balance file was enough. Within the trial balance file you will have the appropriate dimension intersection (account, entity, period, etc.), the type of account (asset vs. liability, etc.) and finally the dollar amount.
    2) Are there any scheduling options available for data loading using ERP Integrator?
    Yes. You can use FDQM to map and validate the data, and then load the data via the command line or via the FDQM batch scheduler.
    3) What is the process for loading data to Planning using ERP Integrator?
    I'll do my best to summarize (assuming you are using FDQM): create rules in ERPi -> configure the adapters in the Workbench Client for the ERPi rules -> configure the FDQM Web Client to call the adapters set in the Workbench Client -> import the data into FDQM. From here you can call your command-line automation for batching if you wish.
    4) How do we load the data to Planning (i.e. monthly load, hourly load)?
    This depends on your business. Assuming you are going to load the data for budget and planning purposes, then maybe your business is happy with a monthly load (and most of the time this is the case). An hourly load might be helpful if you deal with users that need up-to-date actuals. Loading hourly actuals data might be overkill for a budget or planning application, but I have run into situations where it is needed, and then found myself worried about speeding up the calculations after the data is loaded. Long story short, you can load monthly or hourly.

  • Inventory data loads

    Hi All,
    I am new to BI and I have a doubt about inventory data loads: can I load the BF DataSource without loading the BX DataSource and compress with marker update? Yes or no; if yes, justify the reasons.
    Regards,
    Krish

    Hi,
    Read the document "How to handle Inventory Management Scenarios in BW (NW2004)" at http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328?quicklink=index&overridelayout=true
    This will clear all your doubts.

  • PAS Data Load in Cube Builder Dimensions

    Hi All,
      Two important questions about our SSM implementation that are impacting our development:
      1 - Is it possible to develop a data load to fill dimensions created in Cube Builder? For example, the user wants to be able to fill dimensions manually in SSM but also wants to load some additional information via data load into the same cube created via the Cube Builder tool. What I see is that, when we fill the dimension manually in Cube Builder, an internal code is created in the PAS database:
          INPUT
          L0M1281099797397 '001'
          but I am in doubt about how to reproduce the same code via a data load in a PAS procedure.
      2 - In his original system, my customer maintains a relationship between initiatives. Is it possible to do the same in SAP SSM? I looked in the documentation and have not found anything associated with this.
      Regards,
         Cristian

    Hi Cristian,
    Just for clarification: do you want to modify a dimension that was created through Cube Builder? Or do you want to create and maintain a new dimension without going through Cube Builder?
    If you are trying to create a new dimension, how many times would this dimension change? And would the change be made manually, or would the structure be available in a table?
    I would usually suggest keeping the dimension structure in a table and using PAS procedures to create/re-create the dimension. You can assign whatever technical name and label you want, and as long as you maintain the same technical name for each member, you should be able to recreate dimensions without losing any data.
    If this is a dimension that you won't change anymore, you can also code it directly in PAS with the structure you find in the other dimensions:
    INPUT
    input_member1_technical_name 'input_member1-label',
    input_member2_technical_name 'input_member2-label',
    input_member3_technical_name 'input_member3-label',
    input_member4_technical_name 'input_member4-label'
    OUTPUT
    output_member1_technical_name 'output_member1-label',
    output_member2_technical_name 'output_member2-label',
    RESULT
    result_member_technical_name 'result_member-label'
    output_member1_technical_name = SUM
    input_member1_technical_name 'input_member1-label',
    input_member2_technical_name 'input_member2-label'
    output_member2_technical_name = SUM
    input_member3_technical_name 'input_member3-label',
    input_member4_technical_name 'input_member4-label'
    result_member_technical_name = SUM
    output_member1_technical_name,
    output_member2_technical_name
    Best regards,
    Ricardo Vieira

  • Data loaded but request is in red

    Hi Friends,
    I have an ODS and a cube; the data gets loaded into the ODS initially and then into the cube from that ODS. Everything happens through a process chain. Yesterday the ODS data was loaded and is available for reporting, but the request is in red and it has failed to load the data further into the cube. My doubt is: if the data is loaded and available for reporting but the request is red, will that cause any problem? I have tried to change the QM status, but it did not allow the change. Please guide me on this. Max points will be awarded.
    Thanks,
    Prasad

    Hi,
    I think the request in your ODS is partially activated: a red QM status while the data is still available for reporting typically happens when the request has only been partially activated, and that is also why you cannot change the status manually.
    If the load is a full upload, delete the request from the target and reload it.
    But if it is a delta load, change the QM status of the InfoPackage to red in RSMO; at the target level (I mean in the Manage screen) it will not allow you to change it. For a full upload it is also better to change the QM status to red, then delete the request and reload it.
    Is there an ODS activation step in your process chain? If not, check the settings of the ODS in RSA1: on the Settings tab there is a check box 'Activate Data Automatically'. If it is checked, the data gets activated automatically after it is loaded into the ODS. But when you are using a process chain this is not good practice; it is better to keep a separate ODS activation step in the process chain.
    Hope this helps.
    Regards,
    Debjani

  • Data Loading Problem In BI

    Hi,
    Here is a very simple question: I have one flat file with only 2 fields, Country and City, such as:
    Andorra,Andorra la Vella 
    Angola,Luanda 
    Anguilla,The Valley 
    Antigua and Barbuda,Saint John's (Antigua) 
    Argentina,Buenos Aires 
    Armenia,Yerevan 
    Aruba,Oranjestad 
    Australia,Canberra 
    Austria,Vienna 
    Azerbaijan,Baku (Baki) 
    Bahrain,Manama 
    Bangladesh,Dhaka 
    Barbados,Bridgetown 
    When I upload the file, it shows only the country, not the cities...
    Screenshots are here:
    http://www.flickr.com/photos/25222280@N03/3277198674/sizes/o/
    http://www.flickr.com/photos/25222280@N03/3277203778/sizes/o/
    http://www.flickr.com/photos/25222280@N03/3277220212/sizes/o/
    http://www.flickr.com/photos/25222280@N03/3276411379/sizes/o/
    Please, can anyone tell me where I am making a mistake?
    Thanks

    Hi TK,
    You have to map both fields, i.e. both Country and City. But I have one doubt:
    is City a compounding characteristic of Country? If so, you have to load the compounding characteristic separately; it is independent of the master data loads, but it still needs to be mapped.
    Regards,
    Debjani

  • Data Load MAXLs in ASO

    Hi All,
    Greetings of the day !!!!
    I want to understand the difference between "add values create slice" and "override values create slice" used in data-loading MaxL statements.
    Suppose we initialized a buffer and loaded data into the buffer; then we can use the following two MaxL statements:
    1)
    import database AsoSamp.Sample data
    from load_buffer with buffer_id 1
    add values create slice;
    2)
    import database AsoSamp.Sample data
    from load_buffer with buffer_id 1
    override values create slice;
    Q1
    What I am thinking logically is that if I am again loading data at the same intersections from which the slice was created, 'add values' will add to it and 'override values' will overwrite it, e.g. if 100 was present earlier and we load 200 again, then add will make 300 and override will result in 200. Let me know if my understanding is correct.
    Q2
    Why do we use "create slice"? What is the use? Is it for better data-loading performance? Is it compulsory to merge the slices after data loading?
    Can't we just use add values or override values if we don't want to create a slice?
    Q3
    I also saw two MaxL statements for merging: one was merge all data and the other was merge incremental data. What's the difference? In which case do we use which?
    Please help me resolve my doubts. Thanks a lot!

    Q1 - Your understanding is correct. The buffer commit specification determines how what is in the buffer is applied to what is already in the cube. Note that there are also buffer initialization specifications for 'sum' and 'use last' that apply only to data loaded to the buffer.
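    A minimal MaxL sketch of those buffer-level options, assuming the same AsoSamp.Sample database and buffer_id 1 from the question (property names as I recall them from the MaxL reference; check the grammar for your version):
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 1 property aggregate_sum;
    /* or, to keep only the last value loaded for duplicate cells: */
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 1 property aggregate_use_last;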
    Q2 - Load performance. Loading data to an ASO cube without 'create slice' takes time (per the DBAG) proportional to the amount of data already in the cube. So loading one value to a 100GB cube may take a very long time. Loading data to an ASO cube with 'create slice' takes time proportional to the amount of data being loaded - much faster in my example. There is no requirement to immediately merge slices, but it will have to be done to design / process aggregations or restructure the cube (in the case of restructure, it happens automatically IIRC). The extra slices are like extra cubes, so when you query Essbase now has to look at both the main cube and the slice. There is a statistic that tells you how much time Essbase spends querying slices vs querying the main cube, but no real guidance on what a 'good' or 'bad' number is! See http://docs.oracle.com/cd/E17236_01/epm.1112/esb_tech_ref/aggstor_runtime_stats.html.
    The other reason you might want to create a slice is that it's possible to overwrite (or even remove, by committing an empty buffer with the 'override incremental data' clause in the buffer commit specification) only the slice data without having to do physical or logical clears. So if you are continually updating current period data, for example, it might make sense to load that data to an incremental slice.
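    A minimal MaxL sketch of that idea, again assuming the AsoSamp.Sample database and buffer_id 1 from the question: initialize a buffer, load only the replacement records into it (or nothing at all, to wipe the incremental slices), and commit it with the override incremental data clause described above:
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 1;
    /* load the replacement records into buffer 1 here, or leave it empty to clear the incremental slices */
    import database AsoSamp.Sample data
    from load_buffer with buffer_id 1
    override incremental data;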
    Q3 - You can merge the incremental slices into the rest of the cube, or you can merge multiple incremental slices into one single incremental slice, but not into the rest of the cube. Honestly, I've only ever wanted to use the first option. I'm not really sure when or why you would want to do the second, although I'm sure it's in there for a reason.
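    For reference, the two merge statements mentioned above look like this in MaxL, written against the AsoSamp.Sample database from the question (the first folds all incremental slices into the main database slice, the second collapses them into a single incremental slice):
    alter database AsoSamp.Sample merge all data;
    alter database AsoSamp.Sample merge incremental data;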

  • Master data load for 0COSTCENTER (Cost Center) failing

    Hi Experts
    I have a master data load for 0COSTCENTER (Cost Center). The load has started failing a couple of days ago at the DTP, with:
    (R) Filter Out New Records with the Same Key
          (G) Starting Processing...
          (R) Dump: ABAP/4 processor: DBIF_RSQL_SQL_ERROR
    I am unable to understand the reason for this failure. I tried loading the data one cost center at a time and it is still failing, so I doubt it is an internal table storage issue as suggested by the dump.
    Could you help me on this one please.
    Regards
    Akshay Chonkar

    Thanks All
    I have got the issue resolved.
    The error stack for the DTP had accumulated so many entries that it was unable to process further.
    So I deleted the entries in the error stack table using SE14 and then executed my DTP again.
    Everything is fine now; thanks for your help.
    Regards
    Akshay Chonkar
