BCS Load Data Stream Performance

Data stream upload performance is poor: it takes about 50 minutes to load the BCS cube. 0FIGL_O02 is the source data basis.
Have you had similar problems? Do you have any benchmark data on records per hour?
Thanks
Tim

Tim,
This can be controlled by restricting the characteristic values that are read during the load, for example consolidation units, G/L accounts, etc.
Thanks
RJ

Similar Messages

  • BCS - Schedule data stream to execute daily

    Hi everyone, 
    I have created 2 data streams to sync 2 master data loads into BCS from our BW system. 
    Currently, I am executing the data streams manually by going through the menu in transaction UCWB and right-clicking on each one to select Execute.
    Does anyone know of a way to add these manual executions to a schedule that runs daily?
    Thank you for your help.
    Ryan.

    There may be a way to schedule the load-from-data-stream process for master data, but I am not aware of one.
    However, there are the programs UGMDSYNC for master data and UGMDSY20 for master data hierarchies. It may be possible to execute these with the required parameters via the Schedule Manager; a rough sketch of the scheduling idea follows below. If this works, it is suggested that the master data sync always be scheduled before the hierarchy sync.
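    The same daily cadence and ordering constraint can also be expressed in an external scheduler. Purely as an illustration, here is a minimal Python sketch using the third-party schedule package; the two run_* functions are hypothetical placeholders, not real APIs, standing in for however you actually trigger the ABAP programs (e.g., as steps of an SM36 background job):

    ```python
    # Minimal sketch: enforce "master data before hierarchies" on a daily schedule.
    # run_ugmdsync / run_ugmdsy20 are placeholders, NOT real APIs.
    import time
    import schedule  # third-party: pip install schedule

    def run_ugmdsync():
        print("trigger UGMDSYNC (master data sync)")    # placeholder body

    def run_ugmdsy20():
        print("trigger UGMDSY20 (hierarchy sync)")      # placeholder body

    def daily_bcs_sync():
        run_ugmdsync()   # master data first...
        run_ugmdsy20()   # ...then hierarchies, per the ordering suggested above

    schedule.every().day.at("02:00").do(daily_bcs_sync)

    while True:
        schedule.run_pending()
        time.sleep(60)
    ```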

  • TSV_TNEW_PAGE_ALLOC_FAILED - BCS load from data stream task

    Hi experts,
    We got a short dump when executing the BCS Load from Data Stream task. The message is TSV_TNEW_PAGE_ALLOC_FAILED:
    No storage space available for extending an internal table.
    What happened, and how can we solve this error?
    Thanks
    Marilia

    Hi,
    Most likely, the remedy for your problem is the same as in my answer to your other question:
    Raise Exception when execute UCMON

  • Can we use 0INFOPROV as a selection in Load from Data Stream

    Hi,
    We have implemented BW-SEM BPS and BCS (SEM-BW 602 and BI 7) in our company.
    We have two BPS cubes for cost center and revenue planning, plus an actuals data staging cube; we use 0SEM_BCS_10 to load actuals.
    We created a MultiProvider on BPS cubes and Staging cube as a Source Data Basis for BCS.
    Issue:
    When loading plan data or actuals data into the BCS cube (0BCS_C11) using the Load from Data Stream method, we have a performance issue. We automated the load process in a process chain; sometimes it takes about 20 hours for just the plan data load for 3 group currencies, followed by the elimination tasks.
    What I noticed is that when loading plan data, for example, the system also reads the actuals cube, which is not required, and there is no selection available on the Mapping or Selection tab where I could restrict the data load to a particular cube.
    I tried to add 0INFOPROV to the data basis, but then it doesn't show up as a selection option in the data collection tasks.
    Is there a way to restrict the data load into BCS with this load option so that I can control which cube the data is read from?
    I know there is a filter BAdI available, but I am not sure how it works.
    Thanks !!
    Naveen Rao Kattela

    Thanks Eugene,
    We do have other characteristics, like value type (10 = actual, 20 = plan) and version (100 = USD actual, 200 = USD plan). But when I load data into BCS using the Load from Data Stream method, the request goes to all the underlying cubes, which in my case are the planning cubes and the actuals cube. I don't want the request to go to the actuals cube when I am running only the plan load; I think it is causing a performance issue.
    For this reason I am wondering whether I can use 0INFOPROV, as we do in BEx queries, to filter the InfoProvider so that data load performance improves.
    I was able to bring 0INFOPROV into the data basis by adding it to the characteristics folder used by the data basis.
    I can see this InfoObject on the Data Stream Fields tab. I check-marked it for use in the selection and regenerated the data basis.
    I was expecting this field to now be available for selection in the data collection method, but it is not.
    So if it is confirmed that there is no way to use 0INFOPROV as a selection, I will suggest to my client a redesign of the data basis itself.
    Thanks,
    Naveen Rao Kattela

  • Need help in triggering the data stream load using a process chain

    Hi Guru's
    Is it possible to trigger a data stream load using a process chain?
    Any help is highly appreciated.
    Thanks
    Indiran

    Hi Indiran and welcome aboard!
    I don't think this is possible. SAP BW and SAP SEM-BCS are rather independent systems. Though BCS lives on top of the BI-BW stack, it may even have master data different from that in BW.
    Process chains, AFAIK, are entirely a BW feature. Certainly, you may use process chains on the BW side to load the ODS/DSO objects and cubes involved in the BCS data model.
    The main con here is the lost transparency: you don't control everything from the consolidation monitor.
    The pro side is also rather obvious to me. Since there is very often a huge difference between data quality at the data source and in the BCS totals cube, I need to do a lot of data transformation: not only calculations and cleansing, but also a transformation of the data model, from a key figure model to an account model. For me, that is much easier to do in BW.
    I even call the ODS objects, cubes, and routines involved in such transformations an intermediate layer: the layer between the data source and SEM-BCS.
    And this layer lives rather independently of BCS.
    Hope this helps.

  • SEM-BCS: Data Stream Upload

    Hi! All.
    I am facing an issue in data stream upload. The target field 0COMPANY is 6 characters long and the source field 0COMP_CODE is 4 characters. The system raises an error that the value of the target field exceeds the source field, and that an InfoObject with greater length should be used. However, upon mapping company to another 6-character InfoObject instead of 0COMP_CODE, the system returns yet another error: the new InfoObject does not come from the source system, or the source system cannot be determined.
    I was thinking of changing the target field length to four. Is there any workaround for this issue? If the target field 0COMPANY is changed, what implications will that have, or will just changing the field in the data model do the trick?
    Thanks for your input....
    Victor

    Hello Viktor,
    If I understood your problem correctly, I faced the same thing in the past.
    When customizing the load from data stream, set the length to 4 characters or try to include an offset of 2 (sketched below).
    Hope this helps.
    If it does, award points.
    Best regards,
    João Arvanas
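    To make the length/offset suggestion concrete, here is a small Python sketch of what the two mapping options amount to. The field names and the offset of 2 follow the thread; the zero-padding convention is an assumption for illustration only:

    ```python
    # Illustration of mapping between a 4-char 0COMP_CODE and a 6-char 0COMPANY.

    def code_to_company(comp_code: str) -> str:
        """Pad the 4-char code to 6 chars; zero-fill is an assumed convention."""
        return comp_code.rjust(6, "0")   # "1000" -> "001000"

    def company_to_code(company: str) -> str:
        """Read 4 characters starting at offset 2 from the 6-char company."""
        return company[2:6]              # "001000" -> "1000"

    assert code_to_company("1000") == "001000"
    assert company_to_code("001000") == "1000"
    ```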

  • Error while loading Reported Financial Data from Data Stream

    Hi Guys,
    I'm facing the following error while loading Reported Financial Data from Data Stream:
    Message no. UCD1003: Item "Blank" is not defined in Cons Chart of Accts 01
    The message appears in the target data: the item is not filled in almost 50% of the target data records, hence the error.
    Upon deeper analysis I found that some items are defined with a debit/credit sign of + and with no breakdown. When these items appear as negative (credit) in the source data, they are not properly loaded into the target data; the item field is left empty, which causes the error.
    For example, item 114190 (Prepayments) is defined with a + debit/credit sign. When it is posted as negative/credit in the source data, it is not properly written to the target.
    Do I need to define a breakdown category for these items? I think there is something wrong with the item definitions, or I'm missing something.
    I would highly appreciate your quick assistance in this.
    Kind regards,
    Amir

    Found the answer in OSS Note 642591.
    Thanks

  • Flat Data Load in Spend Performance Management

    Dear All,
    We have a new project in which data from various flat files (from multiple non-SAP source systems) needs to be loaded into SPM. We are new to SPM; any step-by-step documentation on loading flat files into SPM using the Data Management tool would be really helpful.
    Regards
    Pankaj

    Hi Pankaj,
    These are the basic steps:
    1) Determine the table and fields you are going to load.
    2) Create the file (see the sketch after this list).
    3) Upload the file to the folder.
    4) Move through the steps in the user interface.
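    For step 2, here is a minimal Python sketch of producing such a flat file as CSV. The column names are made up for illustration; take the real table and field names from the Master Guide and the notes below:

    ```python
    # Hypothetical example of writing a spend flat file for the SPM upload.
    # Column names are illustrative only; use the documented table layout.
    import csv

    rows = [
        {"INVOICE_ID": "4711", "SUPPLIER": "ACME",   "AMOUNT": "1250.00", "CURRENCY": "USD"},
        {"INVOICE_ID": "4712", "SUPPLIER": "GLOBEX", "AMOUNT": "310.50",  "CURRENCY": "USD"},
    ]

    with open("spend_invoices.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()    # header row with the field names
        writer.writerows(rows)  # one line per record
    ```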
    This topic is covered in the Master Guide, section 3.1, Data Management. You can download all of our docs here:
    SAP BusinessObjects Spend Performance Management 3.0 – SAP Help Portal Page
    These notes also have good details:
    1796459 - Sequence for loading data in SPM
    1891572 - SPM Data Model and Tables
    1775340 - The key steps for loading supplier diversity file
    1914255 - How to enable delta for SPM data load     
    and of course these, for the extractors:
    1239883 - Extractor Starter Kit for Spend Performance Management(SPM)
    1679583 - Enhanced PO Extractor for Spend Performance Management
    1358507 - SSA extractors: Contracts and Sch Agr Performance
    * For more information about the delivery process and environment, see Data Management in the SAP Library for SAP Spend Analytics.
    * For more information about the data model, see the analysis scenario SAP BusinessObjects Spend Performance Management – Data Model and Data Flow in SAP NetWeaver BW, in the SAP Library at:
    SAP BusinessObjects Spend Performance Management 3.0 – SAP Help Portal Page
    Kind Regards,
    John Harris
    Senior Support Engineer, SAP Active Global Support,
    SAP America, Inc., 1001 Summit Blvd, #2100, Atlanta, GA 30319, USA

  • Performance issue loading data out of AS400

    Hi,
    For loading data out of AS400, I have created a view containing a join between three AS400 tables connected through a database link (plus some changes in tnsnames.ora and the listener; a hell of a job with Oracle, but it finally works).
    When I run this query in Toad, the results are shown in about 20 seconds.
    When I use this view in OWB to load this data into a target table, then the load takes about 15 MINUTES!
    Why is this so slow?
    Do I have to configure something in OWB to make this load faster?
    Other loads where I use views (on local Oracle tables) run fast.
    It seems that Oracle internally does more than just running the view.
    Who knows?
    Regards,
    Maurice

    Maurice,
    OWB generates optimized code based on whether sources are local or remote. With remote sources, Warehouse Builder will generate code that uses inline views in order to minimize network traffic.
    In your case, you confuse the code generation by creating a view that performs remote/local joins while telling OWB that the object is local (which is only partly true).
    Perhaps what you could do is create one-to-one views and leave it up to OWB to join the objects. One additional advantage of this approach is that impact analysis can be traced back to your source tables, rather than to views defined over those tables with free-text queries.
    Mark.

  • Loading data one year at a time

    Hi,
    We have a situation where we need to load data one year at a time. I saw this done a few years ago but do not remember the details.
    What I am thinking is that we could initially run a full load with the following parameters:
    $$ANALYSIS_START: 1/1/2006
    $$ANALYSIS_START_WID: 1/1/2006
    $$INITIAL_EXTRACT_DATE: 1/1/2006
    $$ANALYSIS_END_WID: 1/1/2006
    And this should give us one year. What I am not sure about is how to load each subsequent year.
    Regards

    Is the issue a performance one (ETLs running too long)? The problem is that if you go year by year and want an "incremental" load for each year, that would be even more of a load, since you are not allowing for a BULK load (where the tables get truncated). Either you truncate and do BULK, or you do incremental, which may be an even heavier load. I think you are assuming that this approach will somehow help you from a hardware-limitation standpoint; do you know for sure that it will?
    If you really do want to do it, as I mentioned, you can edit the INITIAL and END parameters (a sketch of the yearly windows follows below). It would help if you clarified the hardware limitation; I think there are better ways to handle this than what you are proposing.
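    To illustrate the mechanics only: the yearly runs are a sliding one-year window over the date parameters. Here is a small Python sketch generating the parameter sets per run; the $$ names come from the post, and wiring them into each execution plan run is DAC configuration that is not shown here:

    ```python
    # Sketch: generate one-year extraction windows for successive runs.
    from datetime import date

    def yearly_windows(first_year: int, last_year: int):
        for year in range(first_year, last_year + 1):
            start = date(year, 1, 1)
            end = date(year + 1, 1, 1)  # exclusive upper bound: Jan 1 of next year
            yield {
                "$$ANALYSIS_START": start.strftime("%m/%d/%Y"),
                "$$ANALYSIS_END": end.strftime("%m/%d/%Y"),
                "$$INITIAL_EXTRACT_DATE": start.strftime("%m/%d/%Y"),
            }

    for params in yearly_windows(2006, 2008):
        print(params)  # one parameter set per yearly run
    ```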

  • Can we load data in chunks using data pump ?

    We are loading data using data pump, and I want to check my understanding.
    Please correct me if I am wrong:
    ODI will fetch all the data from the source (whether INIT or CDC) in one go and unload it into the staging area.
    If that is true, will performance suffer with very large volumes (50 million records at the source), since ODI tries to load all the data in one go? I believe it would perform better if we loaded in chunks using data pump.
    Please confirm or correct.
    I would also like to know how we can configure chunked loads using data pump.
    Thanks in Advance.
    Regards,
    Dinesh.

    You may consider using LKM Oracle to Oracle (datapump):
    http://docs.oracle.com/cd/E28280_01/integrate.1111/e12644/oracle_db.htm#r15c1-t2
    In 11g, ODI reads from the source and writes to the target in parallel. This is the case where you specify a select query in the source command and an insert/update query in the target command. On the source side, ODI reads records from the source and adds them to a data queue. On the target side, a parallel thread reads data from the queue and writes it to the target. So the overall throughput is governed by the slower of the read and write processes.
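    The overlap described above is the classic producer/consumer pattern: a reader fills a bounded queue while a writer drains it, so throughput is set by the slower side. Here is a generic Python sketch of the idea (not ODI internals; the batch size, queue depth, and source/target functions are stand-ins):

    ```python
    # Generic producer/consumer sketch of parallel read/write through a queue.
    import queue
    import threading

    BATCH_SIZE = 1000
    SENTINEL = None                       # signals end of stream

    def read_source(q: queue.Queue):
        batch = []
        for record in range(10_000):      # stand-in for fetching source rows
            batch.append(record)
            if len(batch) == BATCH_SIZE:
                q.put(batch)              # blocks when the queue is full
                batch = []
        if batch:
            q.put(batch)
        q.put(SENTINEL)

    def write_target(q: queue.Queue):
        written = 0
        while (batch := q.get()) is not SENTINEL:
            written += len(batch)         # stand-in for inserting into the target
        print(f"wrote {written} records")

    q = queue.Queue(maxsize=10)           # bounded: the reader cannot run far ahead
    reader = threading.Thread(target=read_source, args=(q,))
    writer = threading.Thread(target=write_target, args=(q,))
    reader.start(); writer.start()
    reader.join(); writer.join()
    ```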
    Thanks,

  • To load data from a cube in the SCM (APO) system to a cube in the BI system

    Experts,
    Please let me know whether it is possible to load data from a cube in the SCM (APO) system to a cube in the BI system. If so, please explain the steps to perform.
    Thanks,
    Meera

    Hi,
    Think of it this way: to load data from any source, we need a DataSource. You can generate an export DataSource for the cube in APO, then use that DataSource for BW extraction. I think it will work; in my case I load data directly from APO to BW using the DataSources generated on the planning area.
    Why do you need to take the data from the APO cube? Is there any condition behind that? If it is not mandatory, you can use the same DataSource and load the data to BW. If conditions are applied while loading the data into the APO cube, check whether they are possible in BW as well; if so, use the DataSource and do the same calculations directly in BW.
    Thanks
    Reddy

  • Loading data into a cube

    Hi,
    I am loading data from one cube to another. The data volume in the source cube is large, so I am loading month by month. Do I need to delete the indexes for each load, and what is the impact of doing that?
    Regards
    Arunkumar

    SAP best practice when loading data into an InfoCube is to drop the index before the load and recreate it after the load completes.
    Yes, you need to delete and recreate the index for each load.
    If you do not drop the index while loading data into the InfoCube, every newly loaded record has to be fitted into the existing index structures as it arrives.
    Dropping the index before the load and rebuilding it once afterwards avoids this per-record index maintenance.
    By doing this, load performance will improve. The pattern is sketched below.
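    The same drop-load-rebuild pattern can be illustrated outside SAP. Here is a generic sketch using Python's built-in sqlite3 as a stand-in; in BW the equivalent is the Delete Index / Create Index process chain steps around the load, not SQL you write yourself:

    ```python
    # Generic illustration of drop index -> bulk load -> recreate index.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE facts (id INTEGER, amount REAL)")
    conn.execute("CREATE INDEX idx_facts_id ON facts (id)")

    rows = [(i, i * 1.5) for i in range(100_000)]

    conn.execute("DROP INDEX idx_facts_id")                    # 1. drop before load
    conn.executemany("INSERT INTO facts VALUES (?, ?)", rows)  # 2. bulk load without
                                                               #    per-row index upkeep
    conn.execute("CREATE INDEX idx_facts_id ON facts (id)")    # 3. rebuild once
    conn.commit()
    ```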

  • Unable to load data from DSO to Cube

    Good morning all,
    I was trying to load data from a DSO to a cube for validation. Before loading the new data, I deleted everything from both the DSO and the cube; they contain no requests at all. The cube uses "Delta Update". First, the DSO was loaded with 138,300 records successfully. Then I activated the DSO. Finally, when I clicked Execute (DSO --> Cube), it loaded 0 records. I was able to load the data yesterday. What might be the reason for this?
    Thank you so much!

    Hi BI User,
    For a delta upload into the data target, there must be an initialization request in the data target with the same selection criteria.
    So first run the initialization, then perform the delta upload into the cube.
    Regards,
    Subhashini.

  • How to load data from one Infocube to another request by request using DTP

    Hi All,
    I have a scenario where we maintain a backup InfoCube B for InfoCube A. The user loads data from a flat file many times a day in different requests. We need to maintain a backup of InfoCube A on a weekly basis, i.e., the data is refreshed with a delta update from cube A to cube B. In some situations the user randomly deletes some of the requests in InfoCube A after use. Will those deletions be reflected in backup cube B when the data refresh from cube A to cube B is performed? This functionality is similar to reconstruct in BW 3.5. We are running BI 7.0 SP 9.
    Can anyone answer this ASAP?
    Many Thanks,
    Ravi

    You cannot pick and choose requests; a "Get Data by Request" DTP loads request by request on a first-in, first-out basis. You can run some pseudo/fake DTPs if you don't want to load the data from a particular request.
    If the user deletes a request from cube A, it won't be loaded into cube B. But if it has already been loaded into cube B and the user later deletes the request from cube A, you have to delete that request from cube B as well. In order to monitor request by request, run the DTP with "Get Data by Request" set.
