Optimize the data load process into BPC Cubes on BW

Hello Gurus,
We would like to know how to optimize the data load process. Our scenario is that we have the ECC classic General Ledger, and we are looking for the best way to load data into BW InfoCubes from an ECC source.
To complement the question above: from which tables must the data be extracted and then passed to BW so that the consolidation can be done? Also, do other modules such as FI or EC-CS have to be considered for this?
Best Regards,
Rodrigo

Hi Rodrigo,
Have you looked at the BW Business Content extractors available for the classic GL? If not, I suggest you take a look. BW business content provides all the business logic you will normally need to get data out of ECC and into BW for pretty much every ECC application component in existence: [http://help.sap.com/saphelp_nw70/helpdata/en/17/cdfb637ca5436fa07f1fdc0123aaf8/frameset.htm]
Ethan

Similar Messages

  • How we can automate the data loading from BI-BPC

    Dear Gurus,
    Thanks for watching this thread. My question is:
    How can we load data from BI 7.0 to BPC? My environment is SAP BI 7.0 with BPC 7.5 MS version on SQL Server 2008.
    How can we automate the data load from BI to BPC in the MS version? Is a manual flat-file load mandatory in the MS version?
    Thanks in Advance,
    Srinivasan.

    Here are some options:
    1) Use standard packages and schedule them:
        a) Open Hub the master data into a flat file / the BPC application server, and schedule the package "Import Master Data from a Data File" plus the other relevant packages.
    2) Use custom tasks in custom packages (SSIS).
    Procedure:
    1. From Microsoft SQL Server Business Intelligence Development Studio, open the Microsoft SSIS folder.
    2. Create a new package, or select an existing package to modify.
    3. Choose Task -> Register Custom Task.
    4. In the Task Location field, browse for the target .dll file. (Note: by default, the .dll files are stored in BPC/Websrvr/bin.)
    5. Enter a task description, select an appropriate icon, then click OK.
    6. Drag the icon to the designer window and enter data as required.
    7. Save the package.

  • How to automate the data load process using data load file & task Scheduler

    Hi,
    I am automating the data load into a Hyperion Planning application with the help of a Data_Load.bat file and the Task Scheduler.
    I have created the Data_Load.bat file, but I am unable to complete the rest of the process.
    Could you help me automate the data load process using the Data_load.bat file and Task Scheduler, and tell me what other files are required to achieve this?
    Thanks

    To follow up on your question: are you using MaxL scripts for the data load?
    If so, I have seen an issue within the batch (e.g. load_data.bat): if you do not have the full path to the MaxL script in the batch when running it through the Task Scheduler, the task will run but the log and/or error file will not be created. In other words, the batch claims it ran from the Task Scheduler although it didn't do what you needed it to.
    If you are using MaxL, use this in the batch:
    "essmsh C:\data\DataLoad.mxl" - or you can also use the full path for essmsh; either way works. The only reason the MaxL might then not work is if the batch does not pick up all the MaxL PATH changes, or if you need to update your environment variables so that the essmsh command works in a command prompt.

  • Automate the data load process using task scheduler

    Hi,
    I am automating the data load into a Hyperion Planning application with the help of the Task Scheduler.
    I have created the Data_Load.bat file, but I am unable to complete the rest of the process.
    Could you help me automate the data load process using the Data_load.bat file and Task Scheduler?
    Thanks

    Thanks for your help.
    I have done the data loading using the Data_load.bat file; the .bat file calls an .msh file. Following your steps, the automatic data load is now possible.
    If you need this, then please let me know.
    Thanks again.
    Edited by: 949936 on Oct 26, 2012 5:46 AM

  • How to load the data from informatica into bw & how to report the data

    Hi friends,
    how do we load data from Informatica into BW, and how do we report on that data
    using Cognos (i.e. how to access the data in SAP BW using the Cognos 8 BI suite)?
    Thanks,
    madhu.

    In order to report BW data in Cognos, you can extract the data using Open Hub into a DB table from which Cognos reads.
    For BW-Informatica integration, refer to the following docs:
    http://www.aman.co.il/aman/pfd/DataInteg_BR.q103cd.pdf.pdf
    http://h71028.www7.hp.com/enterprise/cache/3889-0-0-225-121.html
    http://devnet.informatica.com/learning/ePresentations.asp
    http://72.14.203.104/search?q=cache:C741L86Q19oJ:devnet.informatica.com/showcase/resources/Essbase_DataSheet.pdfinformaticapowerconnect(BI)&hl=en&gl=in&ct=clnk&cd=3
    http://www.informatica.com/customers/utilities_energy/fpl_group.htm
    http://www.informatica.com/solutions/resource_center/technote_sapbw_65241004.pdf#search=%22Informatica%20to%20Bw%22

  • How to stop the data loads through process chains

    hi,
    I want to stop all the data loads to BI that run periodically through process chains.
    Kindly suggest how I can proceed.

    Hi,
    Go to RSPC, find your process chain, and double-click on the START variant. Then change the timing, i.e. set the start date to 01.01.9999, save, and activate the process chain; it won't start until 01.01.9999.
    Thanks
    Reddy

  • How to delete the data loaded into MySQL target table using Scripts

    Hi Experts
    I created a job with a Validation transform. Data that passes the validation is loaded into a Pass table, and data that fails is loaded into a Fail table.
    My requirement is: if data was loaded into the Fail table, then I have to delete the data loaded into the Pass table using a script.
    In the script I have written the code as
    sql('database','delete from <tablename>');
    but the SQL query execution raises an exception.
    How can I delete the data loaded into the MySQL target table using scripts?
    Please guide me with this error.
    Thanks in Advance
    PrasannaKumar

    Hi Dirk Venken
    I got the solution; the mistake I made was that the query was not correct for MySQL.
    Working query: sql('MySQL', 'truncate world.customer_salesfact_details')
    Failing query: sql('MySQL', 'delete table world.customer_salesfact_details')
    Thanks for your concern
    PrasannaKumar
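    For what it's worth, MySQL has no DELETE TABLE statement; the row-delete form is DELETE FROM, so the non-truncate variant of the same call would presumably be:
        sql('MySQL', 'delete from world.customer_salesfact_details');
    TRUNCATE is faster for emptying a whole table, but it implicitly commits and cannot be restricted with a WHERE clause, so DELETE FROM is the one to use if you ever need a conditional delete.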

  • Steps to load Hierarchies into BPC

    Hi All
    I am new to SAP BPC. I have the following doubts about BPC:
    1. Can anyone let me know how to load hierarchies into BPC? If you have any links which explain this procedure, I'd appreciate you passing them on to me.
    2. What is the difference between Planning and Forecasting?
    Thanks and Appreciate your quick response.
    Thanks & Regards
    ark

    Hi Ark-
    There is an excellent paper on how to load master data (including hierarchies) into BPC located at:
    [https://wiki.sdn.sap.com/wiki/display/BPX/Enterprise%20Performance%20Management%20%28EPM%29%20How-to%20Guides|https://wiki.sdn.sap.com/wiki/display/BPX/Enterprise%20Performance%20Management%20%28EPM%29%20How-to%20Guides]
    The article is titled: "How To… Automate BPC 7.0, version for the NetWeaver Platform Master Data Loads from BW to BPC".
    Planning and Forecasting are business processes. Planning implies a process that is invoked to predict future performance. More typically, businesses use the terms Budgeting and Forecasting to describe how far out in time a prediction is made. Budgeting is a process that delivers at least a full future year of performance expectations. Forecasting is very similar to Budgeting's forward-looking view; however, forecasting is typically focused on the current year's performance, and will typically combine some actual values with a projection of the balance of the current year's anticipated results. Each customer may call these activities by their own internal naming conventions.
    Regards,
    Sheldon

  • Data load from DSO to cube fails

    Hi Gurus,
    The data loads failed last night, and when I dig inside the process chains I find that the cube which should be loaded from a DSO is not getting loaded.
    The DSO itself has been loaded from ECC without errors.
    The error message says: "The unit/currency 'source currency 0CURRENCY' with the value 'space' is assigned to the key figure 'source key figure ZARAMT'".
    I looked in the PSA; it has about 50,000 records, all data packages have a green light, and all amounts have 0CURRENCY assigned.
    I went into the DTP and looked at the error stack; it had nothing in it. Then I changed the error handling option from 'no update, no reporting' to 'valid records update, no reporting (request red)' and executed; the error stack then showed 101 records.
    The ZARAMT field has 0CURRENCY blank for all these records.
    I tried to assign USD to them; the changes were saved, and I executed again, but then a message says the request ID must be repaired or deleted before the execution. I tried to repair it, but it cannot be repaired, so I deleted it and executed. The load fails again, the error stack still shows 101 records, and when I look at the records, the changes I made are gone.
    If I delete the request ID before making the changes and then try to save them, they don't get saved.
    What should I do to resolve the issue?
    thanks
    Prasad

    Hi Prasad,
    The error stack is request-specific. Once you delete the request from the target, the data in the error stack is also deleted.
    In this case, what you should do is:
    1) Change the error handling option to 'valid records update, no reporting (request red)' (as you have already done) and execute the DTP; all the erroneous records will accumulate in the error stack.
    2) Correct the erroneous records in the error stack.
    3) In the DTP, on the Update tab, you will find the option "Error DTP". If it has not been created yet, you will see "Create Error DTP"; click there and execute the Error DTP. The Error DTP fetches the records from the error stack and creates a new request in the target.
    4) Then manually change the status of the original request to green.
    But did you check why the value of this field is blank? If these records come again as part of a delta or full load, your load will fail again. Check the source system and fix it there for a permanent solution.
    Regards,
    Debjani

  • Is there any setting in ODS to accelerate the data loads to ODS?

    Someone mentioned that there is a setting somewhere in the ODS that makes the data load into that ODS much faster. I don't know whether such a setting exists.
    Any idea?
    Thanks

    hi Kevin,
    I think you are looking for transaction RSCUSTA2 and
    Note 565725 - Optimizing the performance of ODS objects in BW 3.0B.
    Also check Note 670208 - Package size with delta extraction of ODS data, and Note 567747 - Composite note BW 3.x performance: Extraction & loading.
    hope this helps.
    565725 - Optimizing the performance of ODS objects in BW 3.0B
    Solution
    To obtain good load performance for ODS objects, we recommend that you note the following:
    1. Activating data in the ODS object
                   In the Implementation Guide in BW Customizing, under Business Information Warehouse -> General BW settings -> Settings for the ODS object, you can make various settings that improve performance when activating data in the ODS object.
    2. Creating SIDs
                   The creation of SIDs is time-consuming and can be avoided in the following cases:
    a) Do not set the BEx Reporting indicator if you are only using the ODS object as a data store; otherwise, setting this indicator causes SIDs to be created for all new characteristic values.
    b) If you are using line items (for example, document number, timestamp and so on) as characteristics in the ODS object, mark these as 'Attribute only' in the characteristics maintenance.
                   SIDs are created in parallel if parallel activation is activated (see above). They are then created using the same number of parallel processes as set for the activation. However, if you specify a server group or a special server in Customizing, these specifications apply only to activation, not to the creation of SIDs; SID creation runs on the application server on which the batch job is running.
    3. DB partitioning on the table for active data
                   The process of deleting data from the ODS object can be accelerated by partitioning at the database level. Select the characteristic by which you want deletion to occur as the partitioning criterion. For more details on partitioning database tables, see the database documentation (DBMS CD). Partitioning is supported with the following databases: Oracle, DB2/390, Informix.
    4. Indexing
                   Selection criteria should be used for queries on ODS objects. The existing primary index is used if the key fields are specified; consequently, the characteristic that is accessed most frequently should be left-justified. If the key fields are only partially specified in the selection criteria (recognizable in the SQL trace), the query runtime can be optimized by creating additional indexes. You can create these secondary indexes in the ODS object maintenance.
    5. Loading unique data records
                   If you only load unique data records (that is, data records with a unique key combination) into the ODS object, the load performance improves if you set the 'Unique data record' indicator in the ODS object maintenance.

  • Data load process for FI module

    Dear all,
    We are using BI 7.00, and one of our FI DataSources, 0EC_PCA_1, had a data load failure. We analysed the cause of the failure and did the following:
    1) deleted the data from the cube and the PSA
    2) reloaded the data (full load) without disturbing the init.
    This solved our problem. But now that the data reconciliation is done, we find doubled entries for some of the G/L codes.
    I have a doubt here.
    Since there is no setup table for FI transactions (correct me if I am wrong), the full load picked up data that was also present in the delta queue, and subsequently the delta load loaded the same data again
    (some G/Ls that were available as delta).
    Kindly explain the functioning of FI data loads: should we go for downtime, and how do FI data loads work without setup tables?
    Can the experts provide a solution for addressing this problem, and a step-by-step process to solve it permanently?
    Regards,
    M.M

    Hi Magesh,
    The FI DataSources do not involve setup tables for full loads, and they do not involve an outbound queue during delta loads.
    A full load happens directly from your DataSource view to BI, and deltas are captured in the delta queue.
    Yes, you are right in saying that when you did a full load, some of the values pulled were also present in the delta queue; hence you have double loads.
    You need to re-initialise completely, as the full load has disturbed the delta process. Whether to take downtime depends on how frequently transactions are happening.
    You need to:
    1. Completely delete the data in BW, including the initialisation.
    2. Take downtime if necessary.
    3. Re-initialise the whole DataSource from scratch.
    Regards,
    Pramod

  • Essbase data load process never terminating

    Hi
    We are using Essbase 11.1.2.1.
    We are trying to load a data file into a BSO Essbase application (Essbase export format, so no load rule), using the "execute in background" option.
    If the file contains no unknown members, it is loaded correctly.
    If the file contains an unknown member, the data load never terminates: we killed it after more than 15 hours while the request was still "terminating", so we had to kill the Essbase process on the server. No error message or file is produced.
    Thanks in advance for your help
    Fanny

    Hi
    I am trying to restart this discussion, because we still have the same problem, no idea why (no idea from Oracle Support either), and my client is waiting for an explanation :-/
    So, some more details.
    Context:
    I have an export (Essbase format) from a database.
    If I import this file into a database containing all the members used in the file => no problem. The data load takes around 10 seconds.
    If I import this file into a database containing all the members used in the file except one => problem. The data load never ends and I have to kill the Essbase application on the Essbase server.
    I have done another test:
    I imported the file into a database containing all the members used in the file (so, no problem).
    I exported the level-0 data in column format.
    I imported this new file into the database containing all the members used in the file, using a load rule => no problem. The data load takes around 10 seconds, and a dataload.err is generated.
    So, John, you are right: the client should really use a column-format export. But I need to explain to him why he cannot (not merely should not) use an export-format file.
    If using an export-format file is possible, I have to identify my problem and make the right modification to solve it.
    If it is not possible, then this is a bug, it is not my responsibility anymore, and I can close the project!
    Thanks in advance for your great ideas!
    Fanny
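    For reference, the column-format load with a load rule can be scripted in MaxL so that rejected records land in an error file instead of blocking the load. A minimal sketch, in which the application/database names, file paths, and rules file name are only assumptions:
        login admin 'password' on localhost;
        /* Column-format level-0 export loaded through a rules file; */
        /* records referencing unknown members are rejected into the */
        /* error file rather than stopping the load.                 */
        import database sample.basic data
          from data_file 'C:\data\level0.txt'
          using server rules_file 'ldall'
          on error write to 'C:\data\dataload.err';
        logout;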

  • How to identify the status of the data load

    Hi All,
    Here is my requirement:
    I have a process called etl_pf2 which loads data into staging tables. I have another process that does a partition exchange and moves the data from the staging tables to the online tables.
    In one procedure, I need to verify whether any data load into the staging tables is currently happening. If yes, the partition exchange should not occur; if no, the movement should happen.
    Any idea of a parameter that the procedure could check to determine the status of the data load into the staging tables?
    Please help me.

    Thanks for the reply,
    but I think the problem is with NQSServer, which crashes but does not disappear from the process list.
    I tried searching for obiee+USER_MEM_ARGS and nqsserver+USER_MEM_ARGS but found only this thread.
    How can JVM settings help nqsserver.exe to work properly?
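    Coming back to the original question: one common way to make the check concrete is a control row that the load process maintains. A sketch only; the table ETL_LOAD_STATUS and its columns are assumptions for illustration, not something from this thread:
        -- Hypothetical control table, updated by etl_pf2: one row per run,
        -- status 'RUNNING' at start and 'COMPLETE' at end of the staging load.
        DECLARE
          v_running NUMBER;
        BEGIN
          SELECT COUNT(*)
            INTO v_running
            FROM etl_load_status
           WHERE process_name = 'ETL_PF2'
             AND status = 'RUNNING';
          IF v_running = 0 THEN
            -- No staging load in flight, so the swap is safe.
            EXECUTE IMMEDIATE
              'ALTER TABLE online_tab EXCHANGE PARTITION p1 WITH TABLE staging_tab';
          END IF;
        END;
        /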

  • You are not authorized to display the data transfer process

    Dear Experts,
    Good morning!
    I am getting message no. RSM906: "You are not authorized to display the data transfer process".
    What does it mean? I want to load data into the 0SD_C03 cube; I am able to load up to the PSA, but I am unable to create the DTP. Is this because of the authorization message, or is there some other reason? I request you to please suggest a way forward on this issue; thanks in advance.

    Hi,
    You may not have authorisation for the S&D cubes, so I feel it is an authorisation issue. Kindly borrow one of your colleagues' IDs to check this; if it works for them, then those roles need to be added to your ID.
    There is an option to override this, but you would have to debug it and change the authorisation return code (sy-subrc) value. Please don't try this if you are not familiar with ABAP; it is also not advisable if this really is an authorisation issue, as logs are recorded for everything you do, and moreover you would have to repeat the debugging every time you click on the DTP.
    Note: if it is a QA or Dev system you can try debugging it, but if it is a production system then I think you have to use an alternate ID.
    Regards,
    Nanda.S

  • Modify the Data Load wizard to remove the check for unique columns

    Hi,
    I'm trying to add a Data Load Wizard to my application, which basically works nicely except for one thing:
    I want to remove the check for unique columns and insert ALL the data the user wants to upload
    (like the Data Load utility under development).
    The reason is that the data I want to upload is an export of my bank statements (in XLS/CSV format),
    and these exports don't have any primary key.
    Using the Apex wizard-created Data Upload pages, I would lose data whenever I had two identical payments in a day.
    Is there a way to do this while keeping the pages created by the wizard for the Data Loading pages?
    Apex 4.2.1

    I would suggest loading the data into a view and processing the records into the real table(s) with instead-of triggers, as sketched below. When you add a "1=2" where clause to the view, the view never returns any data, so every uploaded row passes the duplicate check and gets loaded.
    Otherwise you have to have a "real" PK value, like a timestamp or a statement_nr + line_nr.
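    A minimal sketch of that suggestion; the table and column names are hypothetical:
        -- Real target table for the uploaded bank statements.
        CREATE TABLE bank_statements (
          booking_date DATE,
          amount       NUMBER,
          description  VARCHAR2(200)
        );

        -- The wizard loads into this view; "1=2" guarantees it is always
        -- empty, so the duplicate check never finds a matching row.
        CREATE OR REPLACE VIEW bank_statements_load_v AS
          SELECT booking_date, amount, description
            FROM bank_statements
           WHERE 1 = 2;

        -- Instead-of trigger: every row inserted into the view is
        -- redirected into the real table, duplicates included.
        CREATE OR REPLACE TRIGGER bank_statements_load_trg
          INSTEAD OF INSERT ON bank_statements_load_v
          FOR EACH ROW
        BEGIN
          INSERT INTO bank_statements (booking_date, amount, description)
          VALUES (:NEW.booking_date, :NEW.amount, :NEW.description);
        END;
        /
    Point the Data Load definition at BANK_STATEMENTS_LOAD_V instead of the base table, and the wizard-generated pages themselves stay untouched.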
