Data loading in implementation

Hi gurus,
In an implementation (kindly read the whole thing to understand my doubt):
1. Is client 300 the only client in the BW development server, or do we also have clients 200 and 220 along with 300, as we have in the R/3 development server?
2. Do we activate all the InfoProviders and InfoSources (Business Content or custom made) in the development system BD1 and then transport them to QA and PRD subsequently?
3. If yes, where do we load data? Is it in the development BW from the development R/3, to test our activated objects, or do we not load data at all in the Dev 300 client?
4. If we load data into Dev BW from Dev R/3, then when transporting the objects to QA do we need to delete the data from the InfoCubes and ODS objects and load again from R/3 QA into QA BW to test the objects?
I think this data (master or transactional) in R/3 QA and DEV is only for test purposes.
Am I right? Please clarify my doubts.
thanks
harish

Hi Harish,
      2. You need to activate all InfoProviders and InfoSources, and do all development in the development system only. After that, just transport everything to your QA and PRD systems; there is no need to activate everything again in QA and PRD.
   3. You can load test data from R/3 DEV or QA, it's no problem.
   4. There is no need to delete the data from the data marts before transporting them from DEV to the QA and production systems.
Regards,
PRK

Similar Messages

  • Data Load process for 0FI_AR_4  failed

    Hi!
    I am about to implement the SAP Best Practices scenario "Accounts Receivable Analysis".
    When I schedule the data load process (in dialog, immediately) for the transaction data of 0FI_AR_4 and check it in the monitor, the status is yellow:
    On the top I can see the following information:
    12:33:35  (194 from 0 records)
    Request still running
    Diagnosis
    No errors found. The current process has probably not finished yet.
    System Response
    The ALE inbox of BI is identical to the ALE outbox of the source system
    or
    the maximum wait time for this request has not yet been exceeded
    or
    the background job has not yet finished in the source system.
    Current status
    No Idocs arrived from the source system.
    Question:
    Which actions can I take to run the loading process successfully?

    Hi,
    It seems the job is still in progress.
    You could monitor the job that was created in R/3 (copy the technical name from the monitor, prefix it with "BI", and search for it in SM37 in R/3).
    Keep an eye on ST22 as well if this job is taking too long, as you may already have a short dump for it that has not yet been reported to the monitor.
    Regards,
    De Villiers

  • To improve data load performance

    Hi,
    The data is getting loaded into the cube. Here there are no routines in update rules and transfer rules. Direct mapping is done to the infoobjects.
    But there is an ABAP routine written for 0CALDAY in the InfoPackage. Other than the code below, there is no ABAP code written anywhere. For 77 lakh (7.7 million) records it is taking more than 10 hours to load. Are there any possible solutions for improving the data load performance?
      DATA: L_IDX LIKE SY-TABIX.
      DATA: ZDATE LIKE SY-DATUM.
      DATA: ZDD(2) TYPE N.
    * Locate the CALDAY entry in the InfoPackage selection table
      READ TABLE L_T_RANGE WITH KEY
           FIELDNAME = 'CALDAY'.
      L_IDX = SY-TABIX.
    * +1 month: low end of the interval = first day of next month
      ZDATE = SY-DATUM.
      IF ZDATE+4(2) = '12'.
        ZDATE+0(4) = ZDATE+0(4) + 1.
        ZDATE+4(2) = '01'.
        ZDATE+6(2) = '01'.
        L_T_RANGE-LOW = ZDATE.
      ELSE.
        ZDATE+4(2) = ZDATE+4(2) + 1.
        ZDATE+6(2) = '01'.
        L_T_RANGE-LOW = ZDATE.
      ENDIF.
    * +3 months: high end = last day of the month three months ahead
      ZDATE = SY-DATUM.
      IF ZDATE+4(2) >= '10'.
        ZDATE+0(4) = ZDATE+0(4) + 1.
        ZDATE+4(2) = ZDATE+4(2) + 3 - 12.
        ZDATE+6(2) = '01'.
      ELSE.
        ZDATE+4(2) = ZDATE+4(2) + 3.
        ZDATE+6(2) = '01'.
      ENDIF.
    * Determine the number of days in that month to find its last day
      CALL FUNCTION 'FIMA_END_OF_MONTH_DETERMINE'
        EXPORTING
          I_DATE                   = ZDATE
        IMPORTING
          E_DAYS_OF_MONTH          = ZDD.
      ZDATE+6(2) = ZDD.
      L_T_RANGE-HIGH = ZDATE.
      L_T_RANGE-SIGN = 'I'.
      L_T_RANGE-OPTION = 'BT'.
      MODIFY L_T_RANGE INDEX L_IDX.
      P_SUBRC = 0.
    Thanks,
    rani

    I don't think this filter routine is causing the issue.
    Please apply the general data load performance improvement methods instead.
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap

  • Design pattern / data loading solution

    Hello all!
    I have been working on a few projects which involve loading data, sometimes remotely, sometimes locally, sometimes JSON, sometimes XML. The problem I am having is that, due to the speed of development and the changing minds of various clients, I am finding my designs are too rigid and I would like them to be more flexible. I have been trying to think of a reusable solution to data loading, and would like some advice, as I imagine many of you out there have had the same problem.
    What I would like to do is to create a generic LoadingOperation abstract class, which has member variables of type Parser and Loader, which have parse() and loadData() methods respectively. The Parser and Loader classes are interfaces, and classes that implement these could be XMLParser and JSONParser, LocalLoader and RemoteLoader, etc. With something like this I would like to have a new class which extends LoadingOperation for each thing to be loaded, whether that's a local XML file, or remote JSON, or whatever.
    The problem is that a specific Parser implementation cannot return custom data types without breaking the polymorphic behavior of the LoadingOperation class. I have been messing around with generics and declaring subclasses of LoadingOperation like
    class SpecificLoader extends LoadingOperation<CustomDataType>
    and doing similar things with Parser classes, but this seems a bit weird.
    Does anyone have any suggestions on what I'm doing wrong / could be doing better? I want to be able to react quickly to changing specifications (ignoring the fact that they shouldn't be changing that much!) and have a logical separation of code etc...
    thanks for any help!
    PS: I have also asked this question here: [http://stackoverflow.com/questions/4329087/design-pattern-data-loading-solution]

    rackham wrote:
    Does anyone have any suggestions on what I'm doing wrong / could be doing better? I want to be able to react quickly to changing specifications and have a logical separation of code.
    That depends on the specifics.
    The fact that it seems like processes are similar doesn't mean that they are in fact the same. My code editor and Word both seem to be basically the same but I am rather sure that generalizing between the two would be a big mistake.
    And I speak from experience (parsing customer data and attempting to generalize the process.)
    The problem with attempting to generalize is that you may generalize functionality that is not in fact the same. You then end up with conditional logic all over the place to deal with differences that depend on the users; rather than saving time, that actually costs time, because the code becomes more fragile.
    That doesn't mean it isn't possible, just that you should ensure it really is common behavior before implementing anything.

  • Is there any setting in ODS to accelerate the data loads to ODS?

    Someone mentioned that there is a setting somewhere in the ODS that makes data loads to this ODS much faster. I don't know whether such settings exist.
    Any idea?
    Thanks

    Hi Kevin,
    I think you are looking for transaction RSCUSTA2; see
    Note 565725 - Optimizing the performance of ODS objects in BW 3.0B
    also check Note 670208 - Package size with delta extraction of ODS data, Note 567747 - Composite note BW 3.x performance: Extraction & loading
    hope this helps.
    565725 - Optimizing the performance of ODS objects in BW 3.0B
    Solution
    To obtain a good load performance for ODS objects, we recommend that you note the following:
    1. Activating data in the ODS object
                   In the Implementation Guide in the BW Customizing, you can implement different settings under Business Information Warehouse -> General BW settings -> Settings for the ODS object that will improve performance when you activate data in the ODS object.
    2. Creating SIDs
                   The creation of SIDs is time-consuming and may be avoided in the following cases:
    a) You should not set the BEx Reporting indicator if you are only using the ODS object as a data store. Otherwise, setting this indicator creates SIDs for all new characteristic values.
    b) If you are using line items (for example, document number, time stamp and so on) as characteristics in the ODS object, you should mark these as 'Attribute only' in the characteristics maintenance.
                   SIDs are created at the same time if parallel activation is activated (see above). They are then created using the same number of parallel processes as those set for the activation. However: if you specify a server group or a special server in the Customizing, these specifications only apply to the activation and not to the creation of SIDs. The creation of SIDs runs on the application server on which the batch job is also running.
    3. DB partitioning on the table for active data (technical name:
                   The process of deleting data from the ODS object may be accelerated by partitioning on the database level. Select the characteristic after which you want deletion to occur as a partitioning criterion. For more details on partitioning database tables, see the database documentation (DBMS CD). Partitioning is supported with the following databases: Oracle, DB2/390, Informix.
    4. Indexing
                   Selection criteria should be used for queries on ODS objects. The existing primary index is used if the key fields are specified. As a result, the characteristic that is accessed more frequently should be left-justified. If the key fields are only partially specified in the selection criteria (recognizable in the SQL trace), the query runtime may be optimized by creating additional indexes. You can create these secondary indexes in the ODS object maintenance.
    5. Loading unique data records
                   If you only load unique data records (that is, data records with a one-time key combination) into the ODS object, the load performance will improve if you set the 'Unique data record' indicator in the ODS object maintenance.

  • Data Load -- Best Practices Analytics 9.0.1

    We are currently implementing Essbase and I would be interested in feedback concerning data load practices.
    We have a front-end system which delivers live operational-type data in a SQL database. Currently, I use Access to run queries against the data to load into Enterprise, but I would like to move to an automated, daily load for Essbase. At this point in Essbase, I have several load rules that I apply to Excel files which were exported from Access (not a good solution). I would assume that a better answer would be a SQL load, but I wonder how others typically go about loading information. What about loading financial data consolidated in another system (Enterprise)?
    Thanks for any feedback,
    Chris

    Wanted to give an update of my progress today.
    I again began with a clean installation of 9.0.0.  Brought up the CF administrator and completed the installation.  From there, I went directly to installing the 9.0.1 update and the 9.0.1 hotfix.  To my amazement, the cf administrator came up with an issue. But . . .
    I then went into the administrator to install my 'customizations' (i.e. my datasources, my SMTP mail server, my custom tags, etc).  Truly nothing unusual.  Almost sad to say - vanilla.  I then shut down the service as recommended to have some of the changes 'take effect'.  Boom, the cf administrator no longer appears but gives me the blank screen and the same error messages I have listed in my first note.  So again, it must be "something either I turned on/off incorrectly, but don't even know where to look".
    Would this be considered a bug?
    Libby H

  • Data load fails from DB table - No SID found for value 'MT ' of characteris

    Hi,
    I am loading data into BI from an external system (oracle Database).
    This system has different units like BG, ROL, and MT (for meter), while these units are not maintained that way in R/3/BW; there they are BAG, ROLL, and M respectively.
    Now the user wants a "Z table" to be maintained in BW which holds the mapping between external system units and BW units,
    so that the data load does not fail. The source system keeps its own units, but at the time of loading, the BW units are loaded.
    For example:
    input unit (BG) -> loaded unit in BW (BAG)
    Regards,
    Saurabh T.

    Hello,
    The table T006 (BW side) will have all the units of measure; the only thing is to make sure that all the source system units are maintained in it. It also has fields for source units and target units, so that, as you have mentioned, BG from the source will become BAG. See the fields MSEHI and ISOCODE in table T006.
    If you want to convert to other units then you need to implement Unit Conversion.
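    If the mapping is instead kept in the custom "Z table" the user asks for, a field routine along these lines could translate the unit during the load. This is only a rough sketch: the table ZUOM_MAP, its fields EXT_UNIT and BW_UNIT, and the source field name UNIT are assumed names, and it assumes a 7.x transformation field routine where the source value arrives in SOURCE_FIELDS and the target value is returned in RESULT.
    * Sketch only: ZUOM_MAP, EXT_UNIT and BW_UNIT are assumed names for the
    * custom mapping table; create and maintain it as agreed with the user.
      DATA: LV_BW_UNIT TYPE MEINS.

      SELECT SINGLE BW_UNIT
        FROM ZUOM_MAP
        INTO LV_BW_UNIT
        WHERE EXT_UNIT = SOURCE_FIELDS-UNIT.

      IF SY-SUBRC = 0.
        RESULT = LV_BW_UNIT.          "e.g. 'BG' becomes 'BAG'
      ELSE.
        RESULT = SOURCE_FIELDS-UNIT.  "no mapping entry: keep the source value
      ENDIF.
    Whether unmapped values should be passed through like this or should raise an error in the monitor is a design decision to agree with the user.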
    Also see
    [How to Report Data in Alternate Units of Measure|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b7b2aa90-0201-0010-a480-a755eeb82b6f]
    Thanks
    Chandran

  • Data Load Speed

    Hi all.
    We are starting the implementation of SAP at the company I work for, and I have been designated to prepare the data load from the legacy systems. I have already asked our consultants about data load speed, but they didn't really answer what I need.
    Does anyone have statistics on data load speed (for example, records per hour) using tools like LSMW, CATT, eCATT, etc.?
    I know that the speed depends on what data I'm loading and also on the CPU speed, but any information is good to me.
    Thank you and best regards.

    Hi Friedel,
    Here are the complete details regarding the data transfer techniques.
    <b>Call Transaction</b> (a minimal code sketch follows after this list):
    1. Synchronous processing
    2. Synchronous and asynchronous database updates
    3. Transfer of data for an individual transaction each time the CALL TRANSACTION statement is executed
    4. No batch input log is generated
    5. No automatic error handling
    <b>Session Method:</b>
    1. Asynchronous processing
    2. Synchronous database updates
    3. Transfer of data for multiple transactions
    4. A batch input log is generated
    5. Automatic error handling
    6. SAP's standard approach
    <b>Direct Input Method:</b>
    1. Best suited for transferring large amounts of data
    2. No screens are processed
    3. The database is updated directly using standard function modules, e.g. check the program RFBIBL00
    <b>LSMW:</b>
    1. A code-free tool which helps you transfer data into SAP
    2. Suited for one-time transfers only
    <b>CALL DIALOG:</b>
    This approach is outdated; you should choose one of the above techniques.
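    To give an idea of the shape of the Call Transaction technique, here is a minimal sketch. The transaction code, module pool, screen number and field names below are placeholders only, not taken from any real recording; in practice you would take them from your own SHDB recording and fill the field values from your legacy file.
      DATA: LT_BDCDATA TYPE STANDARD TABLE OF BDCDATA,
            LS_BDCDATA TYPE BDCDATA,
            LT_MSGS    TYPE STANDARD TABLE OF BDCMSGCOLL.

    * First screen of the recorded transaction (placeholder names)
      CLEAR LS_BDCDATA.
      LS_BDCDATA-PROGRAM  = 'SAPMZDUMMY'.   "placeholder module pool
      LS_BDCDATA-DYNPRO   = '0100'.         "placeholder screen number
      LS_BDCDATA-DYNBEGIN = 'X'.
      APPEND LS_BDCDATA TO LT_BDCDATA.

    * One field value filled from the legacy record (placeholder field)
      CLEAR LS_BDCDATA.
      LS_BDCDATA-FNAM = 'ZDUMMY-FIELD1'.
      LS_BDCDATA-FVAL = 'VALUE FROM LEGACY FILE'.
      APPEND LS_BDCDATA TO LT_BDCDATA.

    * No screens displayed, synchronous update, messages collected per call
      CALL TRANSACTION 'ZDUM' USING LT_BDCDATA
           MODE 'N'
           UPDATE 'S'
           MESSAGES INTO LT_MSGS.

      IF SY-SUBRC <> 0.
    *   No batch input log is written - evaluate LT_MSGS yourself
      ENDIF.
    The same BDCDATA table can instead be passed to BDC_INSERT between BDC_OPEN_GROUP and BDC_CLOSE_GROUP to build a session (the Session Method above), in which case the standard batch input log and SM35 error handling take over.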
    Also check the knowledge pool for more reference
    http://help.sap.com
    Cheers,
    Abdul Hakim

  • Data load from semantic partition info-provider

    Hi,
    We are trying to load transaction data from a MultiProvider which is built on semantically partitioned objects. When I try to do it on the semantically partitioned objects, it throws an error; however, the same thing works when I do it on the individual cubes.
    Here is the log
    [Start validating transformation file]
    Validating transformation file format
    Validating options...
    Validation of options was successful.
    Validating mappings...
    Validation of mappings was successful.
    Validating conversions...
    Validation of the conversion was successful
    Creating the transformation xml file. Please wait...
    Transformation xml file has been saved successfully.
    Begin validate transformation file with data file...
    [Start test transformation file]
    Validate has successfully completed
    ValidateRecords = YES
    Error occurs when loading transaction data from other model
    Validation with data file failed
    I was wondering if anybody has implemented a data load package with a semantically partitioned InfoProvider.
    We are on
    BI- 730, SP 7,
    BPC 10.0, SP 13
    Thanks
    prat

    Hello,
    BPC provides its own process chains for loading both transaction and master data from BW InfoProviders.  As far as I know that is the only way to load data from other sources into BPC.
    Best Regards,
    Leila Lappin

  • PAS Data Load in Cube Builder Dimensions

    Hi All,
      Two important questions about our SSM implementation that are impacting our development:
      1 - Is it possible to develop a data load to fill dimensions created in Cube Builder? For example, the user wants to be able to fill dimensions manually in SSM BUT also wants to load some additional information via data load into the same cube created via the Cube Builder tool. What I see is that, when we fill the dimension manually in Cube Builder, an internal code is created in the PAS database:
          INPUT
          L0M1281099797397 '001'
          but I am in doubt about how to reproduce the same code via a data load in a PAS procedure.
      2 - In his original system, my customer maintains a relationship between initiatives. Is it possible to do the same in SAP SSM? I looked through the documentation and have not found anything associated with this.
      Regards,
         Cristian

    Hi Cristian,
    Just for clarification: do you want to modify a dimension that was created through Cube Builder, or do you want to create and maintain a new dimension without going through Cube Builder?
    If you are trying to create a new dimension, how often would this dimension change? And would the change be made manually, or would the structure be available in a table?
    I would usually suggest keeping the dimension structure in a table and using PAS procedures to create/re-create the dimension. You can assign whatever technical name and label you want, and as long as you maintain the same technical name for each member, you should be able to recreate dimensions without losing any data.
    If this is a dimension that you won't change anymore, you can also code it directly in PAS with the structure you find in the other dimensions:
    INPUT
    input_member1_technical_name 'input_member1-label',
    input_member2_technical_name 'input_member2-label',
    input_member3_technical_name 'input_member3-label',
    input_member4_technical_name 'input_member4-label'
    OUTPUT
    output_member1_technical_name 'output_member1-label',
    output_member2_technical_name 'output_member2-label',
    RESULT
    result_member_technical_name 'result_member-label'
    output_member1_technical_name = SUM
    input_member1_technical_name 'input_member1-label',
    input_member2_technical_name 'input_member2-label'
    output_member2_technical_name = SUM
    input_member3_technical_name 'input_member3-label',
    input_member4_technical_name 'input_member4-label'
    result_member_technical_name = SUM
    output_member1_technical_name,
    output_member2_technical_name
    Best regards,
    Ricardo Vieira

  • Log on data load through a BW data flow

    Dears,
    This is a request to all of you who have already implemented this type of functionality: I am trying to find the easiest way, with the least complexity, to implement a log through an existing BW data flow.
    I mean, a data load via an InfoPackage gives some log of correct and incorrect records in the monitor; how can I use this information? Is there a specific table which stores each record and its message, or does a program have to be implemented which will publish the loading status in a specific table?
    Thanks for your quick feedback,
    LL

    Hi Ludovic
    The monitor messages are only written if there is some problem in the record processing. You can only find information for those records which have a problem, or where the processing in the routines encountered a problem.
    What you can do to capture messages is write a transfer routine and update the monitor messages table RSMONMESS from there.
    Also, please check the tables starting with RSMO*.
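    As a starting point for exploring that table, a small snippet like the one below can be used. Only the table name RSMONMESS is taken from this thread; the request number field is assumed to be RNR, so verify the key fields in SE11 before building any custom logging on top of it.
    * Exploratory read of the monitor message table. RSMONMESS comes from
    * the post above; the request number field RNR is an assumption - check
    * the table definition in SE11 first.
      PARAMETERS: P_RNR(30) TYPE C.     "request number from the monitor

      DATA: LT_MONMESS TYPE STANDARD TABLE OF RSMONMESS,
            LS_MONMESS TYPE RSMONMESS.

      SELECT * FROM RSMONMESS
        INTO TABLE LT_MONMESS
        WHERE RNR = P_RNR.              "assumed field name

      LOOP AT LT_MONMESS INTO LS_MONMESS.
    *   Inspect the remaining fields in SE16 to see which ones carry the
    *   message class, number and variables for your loads.
        WRITE: / LS_MONMESS-RNR.
      ENDLOOP.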
    regards
    Vishal

  • Master data Load - Strategy

    Hi All,
    I would like to know the better master data load strategy.
    When using the 3.x master data load strategy, I can load directly into the InfoObject without going through an additional DTP process. Are there any specific advantages to the DTP approach, besides the fact that it is 7.x?
    I have been using NetWeaver 2004s since 2005, but in most of the implementations we used the 3.x methodology for master data loads and DTPs for transaction loads. I would like to know whether SAP recommends the new DTP methodology for master data loads as well. If I load my master data using 3.x, I can avoid one extra step, but will this be discontinued in the future, so that I have to use a DTP even for this?
    Please advise if you know the best way forward, strategically and technically.
    Thanks,
    Alex.

    Alex,
    Please read my answer...
    The new data flow designed by SAP is using the DTP, even for Master Data... Right now you can use the "3.x" style which they maintain for backward-compatibility but down the road, eventually, it will be dropped, so looking ahead, technically and strategically, the right way to go is by using DTP...
    You can go here http://help.sap.com/saphelp_nw70/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm and check under Data Warehousing, Data Distribution, Data Transfer Process...
    You could also open an OSS note to SAP and ask them directly.
    Thanks,
    Luis

  • BI Statistics Master Data Load

    Hi,
    We are implementing BI Statistics and the BI Administration Cockpit. OSS Note 934848 mentions that I have to install/activate all the DataSources that are under the application component BW_TECH --> TCT.
    My question is: do I have to consider loading the master data for all those DataSources/InfoObjects? There are nearly 80-100 DataSources.
    Note 934848 says to install the Business Content process chains, and I see that those process chains cover hardly 10-12 master data loads. Is it OK if I just make use of those Business Content process chains to load master data?
    Initially I created DTPs and transformations to load master data for approximately 50 objects. Later I came to know that the BI Statistics cubes etc. follow the 3.x format and I should use them as delivered. So, can't I still make use of the 7.0-converted DataSource objects with DTPs to load the data?
    -or- do I have to delete all the transformations and DTPs and migrate all those DataSources back to 3.x?
    Thanks,

    Hi Sesh,
    1. Yes, you need to activate all those DataSources. It's better to activate all of them, since in BI 7.0 the TCT InfoProviders are also used as the basis for the workload monitor ST03, in addition to the Admin Cockpit.
    2. It's sufficient to just load the master data using the standard process chains (init chains once, followed by the delta chains).
    3. It's recommended to use the TCT content as 3.x only.
    4. No need to migrate the 3.x content, just use it as it is.
    For more information you can refer to this thread: https://forums.sdn.sap.com/thread.jspa?threadID=279998&tstart=0 and the related posts by Rajani Saralaya K (https://forums.sdn.sap.com/profile.jspa?userID=2035002&start=0).
    Assign points if helpful,
    Regards, Uday

  • EC-CS Data Loading Strategy

    Hello. We are currently in the process of implementing the Enterprise Consolidation Transaction Data InfoCube (0ECCS_C01).
    We are attempting to develop a data loading strategy for the InfoCube. Since this InfoCube does not have a delta process, reloading the entire cube on a daily basis is not feasible due to the length of time it takes. We would like to set it up so that it loads the current month for actuals plus future periods for forecast dollars.
    Has anyone established a data loading process for their consolidated accounting InfoCube that works well and keeps data loading time to a minimum?
    Best regards,
    Lynn

    Hi,
    You could prepare two InfoPackages:
    one with which you upload all data from previous years/months;
    a second with OLAP variables (for example 0DAT) with which you upload data only for the present day/month/year, depending on which variable you select (add this package to the process chain).
    If the second package crashes, you have to repeat the procedure.
    Regards,
    Dominik

  • Reg "Allow Bulk Data Load"

    Hi all,
    Good morning.
    What exactly does the "Allow Bulk Data Load" option on the Company Profile page do? The documentation makes clear that it allows CRM On Demand consultants to load bulk data, but I am not clear on how they load it: do they use any tools other than those the administrator uses for data uploading?
    Any real-life implementation example using this option would be appreciated.
    Regards,
    Sreekanth.

    The Bulk Data Load utility is a utility similar to the Import Utility that On Demand Professional Services can use for imports. It is accessed from a separate URL, and once a company has allowed bulk data load, Professional Services can use the Bulk Data Load utility to import the company's data.
    The Bulk Data Load utility uses a method similar to the Import Utility, with the difference that the number of records per import is higher and you can queue multiple import jobs.
