How to confirm data loaded properly

Hi,
Could you please let me know how to confirm that the data has loaded, and where I can see it. Is there a preview data option in System 9?
Thanks,
Sudhakar

Hi,
The data was loaded, I believe using rules files.
I am in the process of testing; I know the application and database.
I need to confirm whether the data loaded or not. Please tell me how I can confirm that.
Is there a preview data option? I know there is one in v7.3.
Thanks,
Sudhakar

Similar Messages

  • How to tune data loading time in BSO using 14 rules files?

    Hello there,
    I'm using Hyperion Essbase Administration Services v11.1.1.2 with the BSO option.
    In a nightly process using MaxL, I load new data into one Essbase cube.
    In this nightly update process, 14 account members are updated by running 14 rules files one after another.
    These rules files connect 14 times via SQL to the same Oracle database and the same table.
    I use this procedure because I cannot load two or more data fields using one rules file.
    It takes a long time to load 14 accounts one after the other.
    Now my question: how can I minimize this data loading time?
    This is what I found on the Oracle homepage:
    What's New
    Oracle Essbase V.11.1.1 Release Highlights
    Parallel SQL Data Loads - supports up to 8 rules files via temporary load buffers.
    In an older thread, John said:
    As it is version 11, why not use parallel SQL loading; you can specify up to 8 load rules to load data in parallel.
    Example:
    import database AsoSamp.Sample data
    connect as TBC identified by 'password'
    using multiple rules_file 'rule1','rule2'
    to load_buffer_block starting with buffer_id 100
    on error write to "error.txt";
    But this is for the ASO option only.
    Can I use it in my MaxL for BSO as well? Is there a sample?
    What else can I do to tune the nightly update time?
    Thanks in advance for every tip,
    Zeljko
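    (As the post notes, the load-buffer syntax above is documented for ASO; if it is indeed unavailable for BSO on this release, the fallback is a serial sequence of imports, one per rules file. A minimal MaxL sketch of that serial pattern; the application and database names Plan.Main are placeholders:
    import database Plan.Main data
    connect as TBC identified by 'password'
    using server rules_file 'rule1'
    on error append to 'rule1.err';
    import database Plan.Main data
    connect as TBC identified by 'password'
    using server rules_file 'rule2'
    on error append to 'rule2.err';
    /* ...and so on through 'rule14' */
    The single-data-column reshape discussed in the follow-up below is the usual way to collapse these 14 imports into one.)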

    Thanks a lot for your support. I’m just a little confused.
    I will use an example to illustrate my problem a bit more clearly.
    This is the basic table, in my case a view, which is queried by all 14 rules files:
    column1 --- column2 --- column3 --- column4 --- ... ---column n
    dim 1 --- dim 2 --- dim 3 --- data1 --- data2 --- data3 --- ... --- data 14
    Region -- ID --- Product --- sales --- cogs ---- discounts --- ... --- amount
    West --- D1 --- Coffee --- 11001 --- 1,322 --- 10789 --- ... --- 548
    West --- D2 --- Tea10 --- 12011 --- 1,325 --- 10548 --- ... --- 589
    West --- S1 --- Tea10 --- 14115 --- 1,699 --- 10145 --- ... --- 852
    West --- C3 --- Tea10 --- 21053 --- 1,588 --- 10998 --- ... --- 981
    East ---- S2 --- Coffee --- 15563 --- 1,458 --- 10991 --- ... --- 876
    East ---- D1 --- Tea10 --- 15894 --- 1,664 --- 11615 --- ... --- 156
    East ---- D3 --- Coffee --- 19689 --- 1,989 --- 15615 --- ... --- 986
    East ---- C1 --- Coffee --- 18897 --- 1,988 --- 11898 --- ... --- 256
    East ---- C3 --- Tea10 --- 11699 --- 1,328 --- 12156 --- ... --- 9896
    Here are 3 of the 14 (load) rules files used to load the data columns into the cube:
    Rules File1:
    dim 1 --- dim 2 --- dim 3 --- sales --- ignore --- ignore --- ... --- ignore
    Rules File2:
    dim 1 --- dim 2 --- dim 3 --- ignore --- cogs --- ignore --- ... --- ignore
    Rules File14:
    dim 1 --- dim 2 --- dim 3 --- ignore --- ignore --- ignore --- ... --- amount
    Is the table design above what GlennS described as the "Data" column concept, which allows only a single numeric data value?
    In this case I can't tag two or more columns as "Data fields"; I can tag only one column as the "Data field". The other data fields I have to tag as "ignore field during data load". Otherwise, when I validate the rules file, an error occurs: "only one field can contain the Data Field attribute".
    Or can I skip this error message and just try to tag all 14 fields as "Data fields" and load the data?
    Please advise.
    Am I right that the other way is to reconstruct the table/view (and the rules files) as follows, to load all of the data in one pass:
    dim 0 --- dim 1 --- dim 2 --- dim 3 --- data
    Account --- Region -- ID --- Product --- data
    sales --- West --- D1 --- Coffee --- 11001
    sales --- West --- D2 --- Tea10 --- 12011
    sales --- West --- S1 --- Tea10 --- 14115
    sales --- West --- C3 --- Tea10 --- 21053
    sales --- East ---- S2 --- Coffee --- 15563
    sales --- East ---- D1 --- Tea10 --- 15894
    sales --- East ---- D3 --- Coffee --- 19689
    sales --- East ---- C1 --- Coffee --- 18897
    sales --- East ---- C3 --- Tea10 --- 11699
    cogs --- West --- D1 --- Coffee --- 1,322
    cogs --- West --- D2 --- Tea10 --- 1,325
    cogs --- West --- S1 --- Tea10 --- 1,699
    cogs --- West --- C3 --- Tea10 --- 1,588
    cogs --- East ---- S2 --- Coffee --- 1,458
    cogs --- East ---- D1 --- Tea10 --- 1,664
    cogs --- East ---- D3 --- Coffee --- 1,989
    cogs --- East ---- C1 --- Coffee --- 1,988
    cogs --- East ---- C3 --- Tea10 --- 1,328
    discounts --- West --- D1 --- Coffee --- 10789
    discounts --- West --- D2 --- Tea10 --- 10548
    discounts --- West --- S1 --- Tea10 --- 10145
    discounts --- West --- C3 --- Tea10 --- 10998
    discounts --- East ---- S2 --- Coffee --- 10991
    discounts --- East ---- D1 --- Tea10 --- 11615
    discounts --- East ---- D3 --- Coffee --- 15615
    discounts --- East ---- C1 --- Coffee --- 11898
    discounts --- East ---- C3 --- Tea10 --- 12156
    amount --- West --- D1 --- Coffee --- 548
    amount --- West --- D2 --- Tea10 --- 589
    amount --- West --- S1 --- Tea10 --- 852
    amount --- West --- C3 --- Tea10 --- 981
    amount --- East ---- S2 --- Coffee --- 876
    amount --- East ---- D1 --- Tea10 --- 156
    amount --- East ---- D3 --- Coffee --- 986
    amount --- East ---- C1 --- Coffee --- 256
    amount --- East ---- C3 --- Tea10 --- 9896
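    (A view in this single-data-column shape can be derived from the original wide view with one UNION ALL branch per measure. A minimal SQL sketch; the view name SALES_WIDE and all column names are placeholders:
    CREATE OR REPLACE VIEW sales_long AS
    SELECT 'sales' AS account, region, id, product, sales AS data
      FROM sales_wide
    UNION ALL
    SELECT 'cogs', region, id, product, cogs
      FROM sales_wide
    UNION ALL
    -- ...one branch per data column...
    SELECT 'amount', region, id, product, amount
      FROM sales_wide;
    A single rules file can then map ACCOUNT as a dimension field and DATA as the lone data field.)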
    And the third way is to adjust the essbase.cfg parameters DLTHREADSPREPARE and DLTHREADSWRITE (and DLSINGLETHREADPERSTAGE).
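    (A sketch of what those essbase.cfg settings might look like, again using the placeholder application Plan and database Main; thread counts should be tuned to the CPUs available on the server:
    DLSINGLETHREADPERSTAGE FALSE
    DLTHREADSPREPARE Plan Main 3
    DLTHREADSWRITE Plan Main 3
    The Essbase server must be restarted for essbase.cfg changes to take effect.)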
    I just want to be sure that I understand your suggestions.
    Many thanks for the awesome help,
    Zeljko

  • How to design data load process chain?

    Hello,
    I am designing data load process chains for the first time and would like some general information on best practices in that area.
    My situation is as follows:
    I have 3 source systems (R3 and two for which I use flat files).
    What do you suggest: should I define one big chain for the whole loading process (I have about 20 InfoSources), or define a few shorter ones, e.g.:
    1. Master data R3
    2. Master data flat file system 1
    3. Master data flat file system 2
    4. Transaction data R3
    5. Transaction data file sys 1
    ... and execute each one after the successful end of the previous one?
    Could you also suggest any links or manuals on that topic?
    Thank you
    Andrzej

    Andrzej,
    My advice is to make separate chains for master & transaction data (always load in this order!) and afterwards make a 'master chain' where you insert these 2 chains one after the other (so: Start process -> Master data chain -> Transaction data chain).
    Regarding the separate chains: parallelize as much as possible (if functionally allowed). Normally, the number of parallel ('vertical') chains equals the number of CPUs available (check with your Basis person).
    Hope this provides you with enough info to start off with!
    Regards,
    Marco

  • How to Handle Data Loads in BW

    Hi friends,
    we are on the verge of completing development and are planning to go live very soon. Before that, we need to make sure the numbers of records loaded from R/3 into BW are in sync. If the user says there are a certain number of records for the area currently, how do we verify this in BW? And when the number of data packets keeps increasing, how do we keep our targets up to date?
    If there are any steps or procedures to handle the data loads before and after go-live (data sizing, loading, and archiving), can someone please let me know?
    If the client tells us that a certain number of records is expected, how should we handle these situations?
    Your input is really appreciated.
    Thanks a lot in advance,

    Hi,
    The best way to ensure that is, after your data has been loaded into BW, to run a report and verify the results. They should match the R/3 area numbers, allowing for the load logic and any filter criteria.
    In most cases the record counts will not match, so comparing how many records were sent with how many were uploaded through the monitor is not a good check.
    For increasing data volumes, go through the following thread on performance tuning:
    performance tuning
    Data Reconciliation :
    Reconciliation of a BW and R3 data
    Re: Reg -- reconciliation process.
    Cheers,
    Kedar

  • How to make data loaded into cube NOT ready for reporting

    Hi gurus: is there a way by which data loaded into a cube can be made NOT available for reporting?
    Please suggest.
    Thanks

    See, by default a request that has been loaded into a cube is available for reporting. But if you have an aggregate, the system needs this new request to be rolled up into the aggregate as well before it becomes available for reporting. The reason? We write queries against the cube, not against the aggregate, so you only know at runtime whether a query will hit a particular aggregate. That means a query should ultimately get the same data whether it reads from the aggregate or from the cube. Now, if a request has been added to the cube but not to the aggregate, the two objects hold different data. The system takes the safer route of not making the un-rolled-up data visible at all, rather than serving inconsistent data.
    Hope this helps...

  • AWM Newbie Question: How to filter data loaded into cubes/dimensions?

    Hi,
    I am trying to filter the amount of data loaded into my dimensions in AWM (e.g., I only want to load 1-2 years' worth of data for development purposes). I can't seem to find a place in AWM where you can specify a WHERE clause... is there something else I must do to filter the data?
    Thanks

    Hi there,
    Which release of Oracle OLAP are you using? 10g? 11g?
    You can use database views to filter your dimension and cube data, and then map these views in AWM.
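    A minimal sketch of that approach, assuming a fact table SALES_FACT with a date column TIME_KEY (all names here are placeholders):
    CREATE OR REPLACE VIEW sales_fact_dev AS
    SELECT *
      FROM sales_fact
     WHERE time_key >= DATE '2008-01-01';  -- keep roughly the last 1-2 years
    Map the cube to SALES_FACT_DEV instead of SALES_FACT, and create matching views over the time dimension tables so the dimension members line up with the filtered facts.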
    Thanks,
    Stuart Bunby
    OLAP Blog: http://oracleOLAP.blogspot.com
    OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
    OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
    DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html

  • How to find data load time?

    Hi,
    Where do I look for the total data load time, from data source to PSA?

    Hi Honar,
    1) Go to the monitor of the InfoPackage; in the header tab you can find the runtime of the InfoPackage.
    2) Since you are loading data from a source system into BW: in the InfoPackage header tab, copy the request number, then go to the source system from which the data is loading.
    Go to SM37 and enter the request number with the BI prefix; you will find the total runtime of the job in the job log.
    Hope this helps.

  • How to control data load in Info-Package using ABAP?

    Hello Gurus:
    I am trying to resolve a couple of issues.
    1.  I need to run a DAILY full load into a Planning cube. I have 0NETDUEDATE available for selection in the
         InfoPackage. I am getting data from another base A/R cube. I want to load daily ONLY those records
         for which 0NETDUEDATE is greater than the system date. I think the logic would be: IF 0NETDUEDATE
         GT SY-DATUM, bring in the record. What should the statement in 'Result' be for keeping the record?
         (I don't know ABAP, so I need help here!)
    2.  Similarly, before loading the data above, I want to delete all the existing records from the Planning cube where
         0NETDUEDATE is greater than the system date (records loaded the previous day need to be deleted, as the amounts
         may have changed). How can I achieve this selective deletion automatically in a process chain?
    Appreciate your feedback very much.
    Thanks... SMaa

    Hi Shruti,
    1) If I understand your requirement correctly, you need to load only data with 0NETDUEDATE greater than sy-datum.
    You can do this in the InfoPackage: InfoPackage -> Data Selection -> 0NETDUEDATE (field) -> Type (column); there you can select type 6 (ABAP routine). Then you create the ABAP code in this routine.
    In the ABAP code you need to specify the low and high values of the range, as well as the relational operator, such as GT (greater than) or BT (between).
    I guess your code will look something like this:
    *$*$ begin of global - insert your declaration only below this line *-*
    * TABLES: ...
    * DATA:   ...
    *$*$ end of global - insert your declaration only before this line *-*
    *   InfoObject = 0NETDUEDATE
    *   Fieldname  = NETDUEDATE
    *   data type  = NUMC
    *   length     = 0000010
    *   convexit   = PERI6
    form compute_netduedate
      tables   l_t_range structure rssdlrange
      changing p_subrc   like sy-subrc.
    * Insert source code to current selection field
    *$*$ begin of routine - insert your code only below this line *-*
      data: l_idx like sy-tabix.
    * locate the generated range line for this selection field
      read table l_t_range with key fieldname = 'NETDUEDATE'.
      l_idx = sy-tabix.
    * include only records with 0NETDUEDATE greater than today
      l_t_range-sign   = 'I'.
      l_t_range-option = 'GT'.
      l_t_range-low    = sy-datum.
      modify l_t_range index l_idx.
      p_subrc = 0.
    *$*$ end of routine - insert your code only before this line *-*
    endform.
    I am not sure about the exact coding, but you can surely do it; it amounts to specifying the range dynamically (you need to try it out).
    (As mentioned by others, you can also do this in transformation routines, but an InfoPackage routine can be written directly in the production system.)
    2) Regarding the selective deletion: there is no process type in a process chain which does this.
    But there is a function module, "RSDRI_INFOPROV_DELETE", which can be used to develop a program. (I guess you already have an existing Z program available in your system, because this is commonly used.)
    Otherwise you need to develop a Z program for this.
    Thanks and Regards
    Arun

  • How to upload data through Excel into SSM

    Dear SSM Gurus,
    I am new to Strategy Management. I have successfully created the PAS model based on the requirements given by our business unit.
    The business unit (an FMCG product dealer) has 5 regions and 2 accounts, under which we have multiple stores.
    I have monthly sales data by store in an Excel sheet (actual & target).
    I have created a link to the Excel sheet, which has the following format:
    Time     Name of the store     sales_act     sales_target
    Jan-09     Store1 region 1 account1      290     300
    Feb-09     Store1 region 1 account1      320     350
    Please suggest how to upload the Excel data into the actual and target variables (please suggest IDQL commands).
    Regards
    Arif

    Dear Arif,
    For loading the data from an Excel sheet, a procedure is very handy. If possible, create a separate procedure for loading the targets (see the sketch after the transcript below). I am giving you a rough transcript of a procedure which loads data from an Excel sheet. Please make the appropriate changes wherever necessary:
    ....Clear all previous selections of dimensions and variables
    clear status
    ...Set period range for the data to be loaded
    set period 2009
    ....Selecting dimensions and variables for data mapping
    SET DATE DMY
    select dim STORE input
    ....Select the variables to be loaded
    select var sales_act
    ....Set across and down layout to match source data layout
    across  var down  STORE,time
    ACCESS EXTERNAL
    USE 'C:\actual_data.csv'
    DESCRIPTION FREE ,
    Peek only 10
    .....Reading the data into the model
    read
    .....To end access system
    end
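    For the targets, a separate procedure can mirror the transcript above with only the variable and the source file swapped (the file name is a placeholder):
    clear status
    set period 2009
    SET DATE DMY
    select dim STORE input
    select var sales_target
    across  var down  STORE,time
    ACCESS EXTERNAL
    USE 'C:\target_data.csv'
    DESCRIPTION FREE ,
    read
    end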
    Please check if the data loads properly at the input level. If this is successful, then we can consolidate the data, and you can then get the data at other levels as well.
    Hope this helps
    Vijay

  • Troubleshooting 9.3.1 Data Load

    I am having a problem with a data load into Essbase 9.3.1 from a flat, pipe-delimited text file, loading via a load rule. I can see an explicit record in the file, but that record's values are not showing up in the database.
    * I made a special one-off file with the singular record in question and the data loads properly and is reflected in the database. The record itself seems to parse properly for load.
    * I have searched the entire big file (230 MB) for the same member combination, but only came up with this one record, so it does not appear to be a "last value in wins" issue.
    * Most other data (610k+ rows) appears to be loading properly, so the fields, in general, are being parsed out properly in the load rule. Additionally, months of a given item are on separate rows, and other rows of the same item are loading properly and being reflected in the database. Likewise, other items load properly in the months this data loads to, so it is not a missing-metadata issue.
    * The load is 100% successful according to the non-existent error file. Also, loading the file interactively results in the file showing up under "loaded successfully" (no errors).
    NOTE:
    The file's last column contains item descriptions, which may include special characters, including periods, quotes, and others. The load rule moves the description field earlier in the column order, but the file itself has it last.
    QUESTION:
    Is it possible that a special character (a quote?) in a preceding record is causing the field parsing to swallow the CR/LF, and therefore merge the next record into the same record? I keep thinking that if the record is fine alone but not where it sits among other records, it may have to do with the preceding or subsequent records.
    THOUGHTS??

    Thanks Glenn. I was so busy looking for explicit members that I neglected to think through implicit members. I was assuming that implied members don't apply when a rules file parses out the columns, i.e., that a missing member would just error out a record instead of reusing the last known value. In fact, I thought that behavior (last known value) only applied when you didn't use a load rule.
    I would prefer a switch in Essbase that either requires keys in all fields of a load rule or explicitly allows the last known value.

  • Incremental Data load in SSM 7.0

    Hello all,
    I once raised a thread on SDN about how to automate data loads into SSM 7.0:
    Periodic data load for a model in SSM
    Now my new requirement is not to upload all the data again, but only the new data (data arriving after the previous load). Is there a way to do an incremental data load in SSM 7.0? Loading all of the fact data again and again will hurt the performance of the SSM system. Is there a workaround in case there is no direct solution?
    Thanks
    Vijay

    Vijay,
    In your PAS model you can build a procedure to remove data and then load that data to the correct time period.
    In PAS, to remove data but not the variable definitions from the database:
    To remove data for a particular variable:
    REMOVE DATA SALES
    or, if only for particular areas within it:
    SELECT product P1
    SELECT customer C1
    REMOVE SELECTED SALES
    or to remove all data:
    REMOVE DATA * SURE
    or for just a time period:
    REMOVE DATA SALES BEFORE Jan 2008
    Then you would construct or modify your load procedure to load the data for the new period:
    SET PERIOD {date range}
    Regards.
    Bpb

  • BPC and Data Loads

    Hi all,
    I'm sure I've read an article showing how to automate (transactional) data loads from BW into BPC. Does anyone know where I can find the document?
    I would also like to know what the function is called. I was sure it was load_infprovider.

    All of the BPC related How-to Guides are here:
    https://wiki.sdn.sap.com/wiki/display/BPX/Enterprise+Performance+Management+%28EPM%29+How-to+Guides
    You can read about the standard Data Manager package to import transaction data from an InfoProvider (http://help.sap.com/saphelp_bpc70sp02/helpdata/en/af/36a94e9af0436c959b16baabb1a248/content.htm) in the SAP help documentation at
    http://help.sap.com
    SAP Business User > EPM Solutions > Business Planning and Consolidation
    Jeffrey Holdeman (https://www.sdn.sap.com/irj/sdn/wiki?path=/display/profile/jeffrey+holdeman)
    SAP BusinessObjects
    Enterprise Performance Management
    Regional Implementation Group

  • How to identify the status of the data load

    Hi All,
    Here is my requirement:
    I have a process called etl_pf2 which loads data into staging tables. I have another process that does a partition exchange and moves the data from the staging tables to the online tables.
    In one procedure, I need to verify whether any data load into the staging tables is in progress. If yes, the partition exchange should not occur; if no, the movement should happen.
    Any idea of a parameter that can be used in the procedure to check the status of the data load into the staging tables?
    Please help me.
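    One common pattern is a control table that the load process keeps up to date. A minimal PL/SQL sketch, assuming etl_pf2 can set and clear a status row; all object names here are hypothetical:
    -- control table maintained by the load process (etl_pf2):
    -- it sets STATUS = 'RUNNING' at start and 'COMPLETE' at end
    CREATE TABLE etl_load_status (
      load_id    NUMBER       PRIMARY KEY,
      status     VARCHAR2(10) NOT NULL,
      started_at DATE         DEFAULT SYSDATE
    );
    -- in the partition-exchange procedure: skip the exchange
    -- while any load is still marked RUNNING
    DECLARE
      l_running NUMBER;
    BEGIN
      SELECT COUNT(*) INTO l_running
        FROM etl_load_status
       WHERE status = 'RUNNING';
      IF l_running = 0 THEN
        EXECUTE IMMEDIATE
          'ALTER TABLE online_tab EXCHANGE PARTITION p1 WITH TABLE staging_tab';
      END IF;
    END;
    /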

    Thanks for the reply,
    but I think the problem is with NQSServer, which crashes but does not disappear from the process list.
    I tried searching for obiee+USER_MEM_ARGS and nqsserver+USER_MEM_ARGS but found only this thread.
    How can JVM settings help nqsserver.exe work properly?

  • How to View the Loaded Data

    Hi,
    I have loaded data into an ODS from an external flat file and executed the job. Can anyone guide me on how to view the loaded data in the ODS?
    Thanks

    Hi Madhu,
    You can simply go to transaction LISTCUBE, enter the ODS name, and execute.
    Regards
    Hemant

  • How to create a report in BEx based on the last data loaded in the cube?

    I have to create a query with a predefined filter based on the "latest SAP date", i.e. the user only wants to see the very latest situation from the last load. The report should show only the latest inventory stock situation from the last load. As I'm new to BEx, I am not able to find a way to achieve this. Is there any time characteristic which holds the last update date of a cube? Please help and suggest how to achieve this.
    Thanks in advance.

    Hi Rajesh.
    Thanks for your suggestion.
    My requirement is a little different. I built the query on a MultiProvider, and I want to see the latest record in the report based on the latest date the cube was loaded (not the system date). This date (when the cube was last loaded with data) is not populated from any data source. I guess I have to add the "0TCT_VC11" cube to my MultiProvider to fetch the date when my cube was last loaded with data. Please correct me if I'm wrong.
    Thanks in advance.
