Cube to Cube vs ODS to Cube: which is better for loading performance?

BW Experts,
I have the option of loading from cube to cube or from ODS to cube. The source holds about 20 million records.
So, in terms of loading performance, which one is better?
Thanks in advance,
BWer

Hi,
It depends, of course, on whether there is a lot more data in the DSO than in the cube - you didn't mention whether the level of detail differs between the two.
A DSO is a single flat table, which is easier to extract data from, and you can build additional secondary indexes on it to speed up the load.
The InfoCube is, as you know, more complex, since it is built on the multidimensional (star schema) model.
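To illustrate the single-table point: a DSO extract is essentially a flat read on the active table, which a secondary index can support directly. A minimal sketch, assuming a hypothetical DSO ZSALES whose active table is /BIC/AZSALES00 and which carries a CALMONTH field (all names illustrative):

* Plain single-table read from the DSO active table; an index on
* CALMONTH (or whatever you filter on) speeds this up directly.
DATA: lt_sales TYPE STANDARD TABLE OF /bic/azsales00.
SELECT * FROM /bic/azsales00
  INTO TABLE lt_sales
  WHERE calmonth = '200801'.

There is no dimension lookup involved, which is why the same volume usually extracts faster from a DSO than from a cube.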
Kind regards
/martin

Similar Messages

  • BI Statistics Highly Aggregated cube 0TCT_CA1 poor load performance

    The load into the BI Statistics highly aggregated cube 0TCT_CA1 from DataSource 0TCT_DSA1 has very poor performance.
    In our DEV BW it ran for 8 minutes for 12,000 records. Performance is even worse in the test box.
    Initial loads then run very long, since DataSource 0TCT_DSA1 does not allow us to load by calendar month.
    If you have seen this issue, please let me know.
    Jay Roble
    General Mills

    Compressing the cube would not help, since the cube is empty and we are trying to load 90 days of history.
    The source table has an index on the timestamp field that the extractor uses in its delta loads.
    The loads run very slowly, even with the index dropped and no PSA.
    We know that in production there will be approximately 400,000 rows loaded and 14,400 added with the daily delta loads, due to aggregation.
    So we are seeing slow delta loads in our QA testing.
    Not sure why the extractor can't just deliver the 14K aggregated rows instead of the 400K.
    Note: DataSource 0TCT_DSA1 has no selection criteria when doing initial full loads.

  • ODS to Cube Zero Record Load

    Hi,
    A daily load from ODS to cube is scheduled, but the source may or may not have records. Every time the source has zero records, the load only runs as far as the ODS, and the status of the ODS request remains yellow.
    I manually change it to green and activate the request so that the process continues. Is there a way to automate this, so that the ODS is activated even when the load contains zero records?
    Help would be appreciated.

    Hi
    You can do this:
    RSMO > select your request > Settings > Evaluation of Requests (Traffic Light).
    Under 'If no data is available in the system, the request...' select the radio button for status 'Successful' (green).
    Hope it helps
    Thanks
    Teja

  • Split of Cubes to improve the performance of reports

    Hello friends. We are implementing the Finance GL line items for global automobile operations at BMW, with services outsourced to Japan, which has increased the data volume to 300 million records over the 2 years since go-live. We have 200 company codes.
    How to improve performance:
    1. Please advise on splitting the cubes based on year and on company codes, which are region-based. That means Europeans would run their reports out of one cube, and the same report for America would run on another cube.
    The question here: if I make 8 cubes (2 for the current year: 1 for company codes ABC and 1 for DEF; 2 for the previous year: 1 for ABC and 1 for DEF; 2 for the archive year: 1 for ABC and 1 for DEF), how can I tell the query which cube to read the data from? Company code is an authorization variable, so picking up that value and building a customer-exit variable for the InfoProvider will add a lot of work.
    Is there a good way to do this? Does splitting the cubes make sense based on company code, or only on year?
    Please suggest an excellent step-by-step approach for splitting cubes holding 60 million records over 2 years; growth will be the same for the next 4 years, since more company codes are coming.
    2. Please advise whether splitting the cube will improve report performance, or make it worse, since the query now needs to go through 5-6 different cubes.
    Thanks
    Regards
    Soniya

    Hi Soniya,
    There are two ways in which you can split your cube: either based on year or based on company code (i.e. region). While loading the data, write code in the start routine that filters the data. For example, if you are loading data for three regions, say 1, 2 and 3, your code for the region-1 cube would be something like:
    DELETE SOURCE_PACKAGE WHERE region EQ '2'
                             OR region EQ '3'.
    This will load only the data corresponding to region 1 into that cube.
    You can build your reports either on these cubes directly, or you can put a MultiProvider above these cubes and build the report on that; a sketch of the exit-variable approach follows.
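    Re. steering the query to the right cube automatically: one common pattern is the MultiProvider plus a customer-exit variable on 0INFOPROV that derives the cube from the company code the user entered. A minimal sketch for include ZXRSRU01 (enhancement RSR00001); the variable names ZV_INFOPROV and ZV_COMP, the cube names and the "EU" naming rule are all hypothetical:

    DATA: l_s_range     TYPE rsr_s_rangesid,
          loc_var_range TYPE rrrangeexit.

    CASE i_vnam.
      WHEN 'ZV_INFOPROV'.        "exit variable on 0INFOPROV
        IF i_step = 2.           "runs after the user has entered values
          READ TABLE i_t_var_range INTO loc_var_range
               WITH KEY vnam = 'ZV_COMP'.
          IF sy-subrc = 0.
            l_s_range-sign = 'I'.
            l_s_range-opt  = 'EQ'.
            "Hypothetical rule: European company codes start with 'EU'
            IF loc_var_range-low(2) = 'EU'.
              l_s_range-low = 'ZGL_EU1'.
            ELSE.
              l_s_range-low = 'ZGL_US1'.
            ENDIF.
            APPEND l_s_range TO e_t_range.
          ENDIF.
        ENDIF.
    ENDCASE.

    This keeps the query definition on the MultiProvider only, and the authorization variable for company code still applies as before.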
    Thanks..
    Shambhu

  • Noncumulative Cube Copy for Inventory IC where compression is not possible

    Dear All
    We have Month (0RT_C37) and Week (0RT_C36) inventory InfoCubes. Compression has never been run on either of these cubes. The data is fine and the results are correct, but output is very slow. We did the initialization on these cubes in mid-2008.
    We want to copy the entire cube to a Z-cube and perform the compression on that Z-cube. Our system is BW 7.3 SP level 11, and we are going to upgrade to 7.4. We tried to move the marker values (RECORDTP = 1) to the Z-cube with the DTP setting 'Initial non-cumulative for non-cumulative values', but that is not working; the other values, the ones with movements, we are able to move. The only OSS note we found that addresses this issue is 1426533, which is not relevant for us.
    Is there a workaround for a situation where compression has never been run on the cube and compression is impossible due to some issues, so that we can copy the entire data to another Z-cube and perform the compression there?
    Regards
    Reshoi

    Hi Reshoi,
    There is no DSO for 2LIS_03_BF in Business Content as far as I know. It is possible to create a DSO yourself; please review SAP Note 581778 (ODS capability of extractors from inventory management) and the related SAP Notes carefully if you want to go in this direction.
    Please have a look at the document 'Re-initialization of the Material Stocks/Movements cube (0IC_C03) with 2LIS_03_BF, 2LIS_03_BX and 2LIS_03_UM in BW 7.x', which contains step-by-step instructions and lots of valuable information.
    Re. the downtime, there are ways to minimize it. Please see SAP Note 753654 (How can downtime be reduced for setup table update). You can dramatically reduce the downtime by distinguishing between closed and open periods; the closed periods can be updated retrospectively.
    Please refer to the document 'How to Handle Inventory Management Scenarios in BW (NW2004)' for an evaluation of different inventory management scenarios.
    I would also like to recommend reviewing SAP Note 419490 (Non-cumulatives: poor query performance) for 5 performance tips.
    Last but not least, you might want to have a look at the following SAP Notes:
    SAP Note 436393 - Performance improvement for filling the setup tables;
    SAP Note 602260 - Procedure for reconstructing data for BW.
    Best regards,
    Sander

  • BW BCS cube (0BCS_VC10) report: huge performance issue

    Hi Masters,
    I am working on a solution for a BW report developed on the 0BCS_VC10 virtual cube.
    Some of the queries take 15 to 20 minutes to execute.
    This is a huge performance issue. We are using BW 3.5, and the report is developed in BEx and published through the portal. If anyone has faced a similar problem, please advise how you tackled it, and please give the detailed analysis approach you used to resolve it.
    The current support package levels we are on:
    SAP_BW 350 0016 SAPKW35016
    FINBASIS 300 0012 SAPK-30012INFINBASIS
    BI_CONT 353 0008 SAPKIBIFP8
    SEM-BW 400 0012 SAPKGS4012
    Best of Luck
    Chris

    Ravi,
    I already did that; it is not helping the performance much. Reports are still taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue, and how they resolved it.
    Regards,
    Chris

  • Can you please help me out with the InfoCubes for Budget Execution?

    Hi,
    Can you please help me out with the InfoCubes for Budget Execution and Cash Management.
    Thanks in advance,
    Andy


  • Best practices for loading APO planning book data to a cube for reporting

    Hi,
    I would like to know whether there are any best practices for loading APO planning book data to a cube for reporting.
    I have seen 2 types of design:
    1) The planning book extractor data is loaded first into a cube within the APO BW system, and then transferred to the actual BW system. Reports are run from the cube in the actual BW system.
    2) The planning book extractor data is loaded directly into a cube within the actual BW system.
    We do these data loads during evening hours once in a day.
    Rgds
    Gk

    Hi GK,
    What I have normally seen is:
    1) Data is extracted from the APO planning area to an APO cube (for BACKUP purposes), weekly or monthly, depending on how much data change you expect and how critical it is for the business. Backups are mostly monthly for DP.
    2) Data is extracted from the APO planning area directly to a DSO in the staging layer of BW, and then on to the BW cubes for reporting - for DP monthly, for SNP daily.
    You can also use option 1 that you mentioned. In this case the APO cube is the backup cube, while the BW cube is the one you use for reporting, and the BW cube gets its data from the APO cube.
    The benefit in this case is that data has to be extracted from the planning area only once, so the planning area is available to jobs/users for more time. However, backup and reporting extraction get mixed in this flow, so issues in the flow can impact both the backup and the reporting. We have used this scenario recently, and have yet to see the full impact.
    Thanks - Pawan

  • Standard BI Content DataSources and cubes used for the cement industry

    HI,
    Kindly, can anyone suggest the list of standard BI Content DataSources and standard BI Content cubes used for cement-industry-related projects?
    Standard BI Content DataSources and cubes for the modules SD, product costing, inventory, and GL.
    Kindly suggest.
    Regards,
    prasad.

    Hi,
    You can use the normal InfoCubes/DataSources for the cement industry; I recently saw the design of a cement-industry BI system, and they used the same InfoCubes and DataSources there.
    For cube and DataSource details, see SDN:
    https://wiki.sdn.sap.com/wiki/display/BI/BWSDMMFIDATASOURCES
    Thanks
    Reddy

  • ERROR - 1270027 - Cannot proceed while the cube is being loaded.

    Hello,
    I am getting the following error messages when loading to an ASO cube via MaxL:
    ERROR - 1270027 - Cannot proceed while the cube is being loaded.
    ERROR - 1241101 - Unexpected Essbase error 1270027.
    Any help would be appreciated.
    Thanks
    SJ

    If you initialized a load buffer and your process did not complete, or you neither committed the buffer contents nor destroyed the buffer, you will get this error. Try destroying the load buffer and then load again.
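    A minimal MaxL sketch, assuming a hypothetical application/database MyApp.MyDb and an orphaned buffer with id 1 (check the actual buffer id first):

    /* list any load buffers that are still open on the database */
    query database 'MyApp'.'MyDb' list load_buffers;
    /* destroy the orphaned buffer so the lock on the cube is released */
    alter database 'MyApp'.'MyDb' unload load_buffer with buffer_id 1;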

  • Cube.obj won't load

    cube.obj won't load. I installed Java 3D; all the other Java 3D tutorials work except the ones with Wavefront .obj models. My code is:
    //Comment out the following package statement to compile separately.
    package com.javaworld.media.j3d;
    import java.awt.*;
    import java.awt.event.*;
    import java.io.*;
    import javax.media.j3d.*;
    import javax.vecmath.*;
    import com.sun.j3d.loaders.*;
    import com.sun.j3d.loaders.objectfile.*;
    import com.sun.j3d.utils.geometry.*;
    import com.sun.j3d.utils.universe.*;
    /**
    * Example06 uses Sun's ObjectFile to load in Wavefront OBJ
    * format 3D content.
    * <P>
    * Note that using the default ObjectFile settings does read
    * in the data, but does not give the Java 3D runtime enough
    * information to display the file correctly after parsing
    * it. The world looks empty, though the object has been
    * loaded.
    * <P>
    * To display and interact with OBJ content in Java 3D,
    * please use Sun's more robust demo application, ObjLoad.
    * (Available in the Java 3D examples download from Sun's site.)
    * <P>
    * This version is compliant with Java 1.2 and
    * Java 3D 1.1 Beta 2, Nov 1998. Please refer to: <BR>
    * http://www.javaworld.com/javaworld/jw-01-1999/jw-01-media.html
    * <P>
    * @author Bill Day <[email protected]>
    * @version 1.0
    * @see com.javaworld.media.j3d.Example04
    * @see com.sun.j3d.loaders.Scene
    * @see com.sun.j3d.loaders.objectfile.ObjectFile
    */
    public class Example06 extends Frame {

    /**
    * Instantiates an Example06 object.
    */
    public static void main(String args[]) {
      new Example06();
    }

    /**
    * The Example06 constructor sets the frame's size, adds the
    * visual components, and then makes them visible to the user.
    * <P>
    * We place a Canvas3D object into the Frame so that Java 3D
    * has the heavyweight component it needs to render 3D
    * graphics into. We then call methods to construct the
    * View and Content branches of our scene graph.
    */
    public Example06() {
      //Title our frame and set its size.
      super("Java 3D Example06");
      setSize(800,600);

      //Here is our first Java 3D-specific code. We add a
      //Canvas3D to our Frame so that we can render our 3D
      //graphics. Java 3D requires a heavyweight component
      //Canvas3D into which to render.
      Canvas3D myCanvas3D = new Canvas3D(null);
      add(myCanvas3D,BorderLayout.CENTER);

      //Turn on the visibility of our frame.
      setVisible(true);

      //We want to be sure we properly dispose of resources
      //this frame is using when the window is closed. We use
      //an anonymous inner class adapter for this.
      addWindowListener(new WindowAdapter()
        {public void windowClosing(WindowEvent e)
          {dispose(); System.exit(0);}
        });

      //Use SimpleUniverse. Move viewing position so we can
      //see everything in our scene.
      SimpleUniverse myUniverse = new SimpleUniverse(myCanvas3D);
      BranchGroup contentBranchGroup = constructContentBranch();
      myUniverse.addBranchGraph(contentBranchGroup);
      ViewingPlatform myView = myUniverse.getViewingPlatform();
      TransformGroup myViewTransformGroup = myView.getViewPlatformTransform();
      Transform3D myViewTransform = new Transform3D();
      myViewTransform.setTranslation(new Vector3f(2.0f,0.0f,11.0f));
      myViewTransformGroup.setTransform(myViewTransform);
    }

    /**
    * constructContentBranch() is where we specify the 3D graphics
    * content to be rendered. Here we read in a square using
    * Sun's OBJ loader, then return this to be rendered. This
    * square could be replaced with more complicated content exported
    * from 3D modeling programs supporting the OBJ format.
    */
    private BranchGroup constructContentBranch() {
      ObjectFile myOBJ = new ObjectFile();
      Scene myOBJScene = null;

      //Attempt to load in the OBJ content using ObjectFile.
      try {
        myOBJScene = myOBJ.load("cube.obj");
      } catch (FileNotFoundException e) {
        System.out.println("Could not open OBJ file...exiting");
        System.exit(1);
      }

      //Construct and return branch group containing our OBJ scene.
      BranchGroup contentBranchGroup = new BranchGroup();
      contentBranchGroup.addChild(myOBJScene.getSceneGroup());
      return(contentBranchGroup);
    }
    }

    Most probably the .obj file is not in the right directory: ObjectFile.load("cube.obj") resolves the relative name against the working directory the JVM was launched from, as can be seen from the code.
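    To see where the JVM is actually looking, you can resolve the name against the working directory before calling load(). A small hedged addition around the load call in constructContentBranch():

    //Report the absolute path so a wrong working directory is obvious,
    //then load via that absolute path.
    java.io.File objFile = new java.io.File("cube.obj");
    System.out.println("Looking for: " + objFile.getAbsolutePath());
    myOBJScene = myOBJ.load(objFile.getAbsolutePath());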

  • CME: is a CUBE license required for an ISP SIP trunk?

    Hi,
    We have an IP telephony setup with CME 9.1 on a 2921 ISR G2 router.
    The client would like to take an ISP SIP trunk.
    Do we require a CUBE license for this? I tried it on a 2851 router without the CUBE license and it works fine.
    How can I check which CUBE licenses are installed on my CME?
    When I check the licenses in CME, I can see that the UC K9 and IPBase K9 licenses are permanent.
    Is this enough for the SIP trunk configuration?
    Thanks & Regards
    Nithin Louis.

    Just to add to George's answer - CUBE licenses are somewhat "honor"-based right now, which means you can configure everything and it will work, but you need the license paperwork to prove that you own an adequate number of CUBE licenses for your system.
    Maybe in the future Cisco will improve this so that you must install the license before it works - like much other licensing today...
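    To check what the router itself reports, the standard IOS licensing commands can be used (output details vary by IOS release):

    ! lists the installed licenses and their states (permanent, evaluation, ...)
    show license
    ! shows which technology-package features are enabled on the platform
    show license feature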
    BR,
    Dragan

  • APO InfoCube creation and loading

    Hi,
    Can somebody tell me the InfoCube creation and loading cycle, step by step?
    Regards

    Hi R S D,
    The following link will guide you through creating an InfoCube as you require:
    http://help.sap.com/saphelp_scm70/helpdata/EN/23/054e3ce0f9fe3fe10000000a114084/frameset.htm
    The menu path is Modelling > InfoProviders > InfoCubes > Dimension > Creating InfoCubes.
    Hope you got your solution.
    Please confirm
    Regards
    R. Senthil Mareeswaran.

  • InfoCubes? ODS? MultiProvider? InfoSet? - Performance?

    Hello Experts,
    InfoCubes? ODS? MultiProvider? InfoSet?
    In our design stage we had no problem creating ODS objects and cubes, but as our requirements kept changing, we had to create InfoSets rather than MultiCubes, since we did not have common fields between the cube and the ODS. Secondly, we couldn't include the cubes, as InfoSets can only be built on ODS objects and InfoObjects.
    Can someone throw some light on the performance implications of our current design, where we have ODS objects and InfoSets? What are the pros and cons between:
    1) InfoCubes and ODS
    2) MultiCubes and InfoSets
    Any thoughts would be of great help in our analysis.
    Thanks,
    Priya
    Guys, anyone who has faced difficulty analyzing such issues, please do post your views.

    We had the same requirements... and had to build a lot of InfoSets.
    Reporting on any of the navigational attributes is NOT possible on an InfoSet.
    Go to RSRT and look at the fields on which the query is doing a sequential read; if required, try to build indexes on those fields.
    In some cases we cannot restrict the queries; if you are querying yearly reports, running them against InfoSets is not at all recommended.
    For better performance of your queries, try to build secondary indexes and, as far as possible, try to avoid InfoSets. BW 3.5 has the APD (Analysis Process Designer); try to work out something with that if you are on 3.5.
    For performance, try to compress the requests in your cubes.
    Let me know.

  • Loading performance of the InfoCube & ODS?

    Hi Experts,
    Do we need to deactivate the aggregates on the InfoCube before loading so that it decreases the loading time, or does it not matter? I mean, if we have aggregates created on the InfoCube, is that going to affect the loading of the cube in any way? Also, please let me know a few tips to increase the loading performance of a cube/ODS. Some of them are:
    1. Delete indexes before loading, and re-create them after loading.
    2. Run parallel processes.
    3. Compression of the InfoCube - how does the compression of an InfoCube decrease the loading time?
    Please throw some light on the loading performance of the cube/ODS.
    Thanks,

    Hi Daniel,
    Aggregates will not affect the data loading; aggregates are just views, similar to an InfoCube.
    As you mentioned, some performance tuning options while loading data:
    Compression is a bit like archiving the InfoCube data: once compressed, data cannot be decompressed, so you need to ensure the data is correct before compressing. When you compress the data, you free up space, which improves data loading performance.
    Other than the above options:
    1. If you have routines written at the transformation level, check whether they are tuned properly.
    2. PSA partition size: in transaction RSCUSTV6 the size of each PSA partition can be defined. This size defines the number of records that must be exceeded to create a new PSA partition. One request is contained in one partition, even if its size exceeds the user-defined PSA size; several packages can be stored within one partition.
    The PSA is partitioned to enable fast deletion (DDL statement DROP PARTITION). Packages are not deleted physically until all packages in the same partition can be deleted.
    3. Export DataSource: the export DataSource (or data mart interface) enables the population of InfoCubes and ODS objects from other InfoCubes.
    The read operations of the export DataSource are single-threaded (i.e. sequential). Note that during the read operations - depending on the complexity of the source InfoCube - the initial time before data is retrieved (i.e. parsing, reading, sorting) can be significant.
    The posting to a subsequent data target can be parallelized via the ROIDOCPRMS settings for the "myself" system. But note that several data targets cannot be populated in parallel; there is only parallelism within one data target.
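    Re. tip 1 from the question (dropping and re-creating indexes around the load): besides the standard process-chain steps, this can also be scripted. A minimal sketch using the function modules RSDU_INFOCUBE_INDEXES_DROP and RSDU_INFOCUBE_INDEXES_REPAIR for a hypothetical cube ZSALESCUBE (please verify these function modules exist in your release before relying on them):

    * Drop the secondary indexes on the fact table before the load...
    CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
      EXPORTING
        i_infocube = 'ZSALESCUBE'.

    * ...run the data load here...

    * ...then rebuild the indexes so that queries stay fast.
    CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
      EXPORTING
        i_infocube = 'ZSALESCUBE'.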
    Hope it helps!!!
    Thanks,
    Lavanya.
