Semantic Partitioning in DSO - BW 7.3

Hi experts,
I am working with BW 7.3. I would like to know: if I create a semantic partition of a DSO by year or by plant, what options do I have if, for example, a new plant appears or a new year begins that does not yet exist in my DSO? Is there an automatic process, or what is the best practice for managing this?
Many thanks!
Judith.

The SPO wizard places an entry in the table RRKMULTIPROVHINT.
This entry provides a hint to the OLAP engine so that it knows how the cubes are partitioned. Using the characteristics specified in the table, a quick read is done on the dimension table of each cube. If the value is not found, no subquery is spawned for that InfoCube.
Using RRKMULTIPROVHINT allows multiple values per cube, whereas specifying a constant allows only one value to be read.
Admittedly this is not quite as efficient as specifying constants, but it is a lot more flexible, and the read on the dimension table is fast (as opposed to scanning the fact table).
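To make the mechanism concrete, here is a small ABAP sketch that lists the hint entries for a MultiProvider. The field names follow the commonly documented RRKMULTIPROVHINT layout (MULTIPROV, POSIT, CHANM); verify them in SE11, and note that the entries themselves are normally maintained via SE16/SM30 rather than in code. The MultiProvider name ZMP_SALES is a placeholder.

    " Sketch: list the OLAP hint entries maintained for a MultiProvider.
    " Assumed layout of RRKMULTIPROVHINT: MULTIPROV, POSIT, CHANM.
    DATA lt_hint TYPE STANDARD TABLE OF rrkmultiprovhint.

    SELECT * FROM rrkmultiprovhint
      INTO TABLE lt_hint
      WHERE multiprov = 'ZMP_SALES'.   " placeholder MultiProvider name

    " Each row names one characteristic (CHANM, e.g. 0CALYEAR or 0PLANT)
    " that the OLAP engine probes in each PartProvider's dimension
    " tables before deciding whether to spawn a subquery for it.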

Similar Messages

  • Semantic Partitioning Delta issue while loading data from DSO to Cube - BW 7.3

    Hi All,
    We have created the semantic partitions with the help of a BADI. Everything looks good.
    The first time, I loaded the data with a full update.
    The second time, I initialized the delta and pulled the delta from the DSO to the cube. The DSO is standard, whereas the cube is partitioned via semantic partitioning. What I can see is that only the records updated in the latest delta show up in the report; all the rest are ignored by the system.
    I tried compression, but it still did not work.
    Has anyone faced this kind of issue?
    Thanks

    Yaseen & Soujanya,
    It is very hard to guess the problem with the amount of information you have provided.
    - What do you mean by cube is not being loaded? No records are extracted from DSO? Is the load completing with 0 records?
    - How is data loaded from DSO to InfoCube? Full? Are you first loading to PSA or not?
    - Is there data already in the InfoCube?
    - Is there change log data for DSO or did someone delete all the PSA data?
    Since there are so many possible reasons for the behavior you are witnessing, your best option is to approach your resident expert.
    Good luck.
    Sudhi Karkada

  • Data load from a semantically partitioned InfoProvider

    Hi,
    We are trying to load transaction data from a MultiProvider which is built on semantically partitioned objects. When I try it on the semantically partitioned objects, it throws an error; however, the same load works when I do it on the individual cubes.
    Here is the log
    Here is the log
    [Start validating transformation file]
    Validating transformation file format
    Validating options...
    Validation of options was successful.
    Validating mappings...
    Validation of mappings was successful.
    Validating conversions...
    Validation of the conversion was successful
    Creating the transformation xml file. Please wait...
    Transformation xml file has been saved successfully.
    Begin validate transformation file with data file...
    [Start test transformation file]
    Validate has successfully completed
    ValidateRecords = YES
    Error occurs when loading transaction data from other model
    Validation with data file failed
    I was wondering if anybody has implemented a data load package with a semantically partitioned InfoProvider.
    We are on
    BI- 730, SP 7,
    BPC 10.0, SP 13
    Thanks
    prat

    Hello,
    BPC provides its own process chains for loading both transaction and master data from BW InfoProviders.  As far as I know that is the only way to load data from other sources into BPC.
    Best Regards,
    Leila Lappin

  • Setting up semantic partitions in ASE

    Running ASE 15.5.
    We have a non-partitioned table with 640 million rows. We are looking at setting up range semantic partitions by create date.
    There are 200+ million rows for 2013 and 380+ million rows for 2014.
    I am thinking about setting up the following partitions by create date:
    Partitions 1 - 4: for 2015, by quarter
    Partition 5: for year 2014
    Partition 6: 2013 and earlier
    Add new partitions for each new year...
    Only current data is updated -- i.e. any data more than a month old is no longer updated.
    Is this a viable breakdown?
    This is my 1st attempt at partitioning.

    Actually, I would like to comment that there are some nuances with partitioning and stats to be aware of, but as far as your question goes, a lot depends on your version of ASE. For pre-15.7 ASE, sampling works, but the best practice taught in classes was to do a full non-sampled stats first, then do 2-3 updates with sampling, then a full non-sampled stats again, in a cycle - so if doing update stats weekly, the first run of the month would be full non-sampled and the other weeks of the month would be sampled. What this does is help you determine whether sampled stats behave similarly to non-sampled stats, by virtue of the fact that you may see performance issues in the latter weeks of the month using sampled stats vs. non-sampled. How well this works often depends on how values are added to the different columns - e.g. scattered around evenly or monotonically increasing. I personally have found that in later revs of 15.7 (e.g. SP100+), running stats with hashing is much faster than stats with sampling and generates more accurate stats. I know Mark has seen some issues - not sure where/why - but then I have also seen problems with update stats generically in which we have had to delete stats before re-running update stats, so I am not sure whether the problem was caused by starting with non-hashed stats and then trying to update with hashed stats (I have always started with hashed stats).
    Now, there is an interesting nuance of update stats with partitioning. Yes, you can run update stats on a partition basis, but it doesn't update table stats (problem #1) and it can also lead to stats explosion. I am not saying don't run update stats on a partition basis - I actually encourage it - but I suggest you know what is going to happen. For example, partitioning - especially range partitioning - works best as far as maintenance commands go when you get into the 30+ partition range, and especially in the 50-100 partition range, assuming evenly distributed partitions. In your case, you will likely get the same effect on the 2014 and 2015 partitions as they will be much smaller. When you run update stats on each partition (assuming the default histogram steps), you will get 20 steps PER partition, which can mean 1000+ steps for the entire table (if 50 partitions). That is not necessarily a problem unless a query needs to hit all the partitions (or some significant number of them), at which point the query will need considerable proc cache to load those stats. So, when using partitions, keep in mind that you may need to increase proc cache to handle the increased usage during optimization. On the table stats side, this means that periodically you might want to run update statistics (not update index statistics) on the table; however, in my experience this hasn't been as necessary as one would think, and might only be needed if you see the optimizer picking a table/partition scan when you think it should be choosing an index.
    In your case, you might only have 20 steps for the one honking huge partition, then 20 steps for 2014 and 20 steps for each of the 2015 quarterly partitions. You might want to run update stats for the 2013-and-earlier partition with a larger step count (e.g. 100) and then run it with 20 or so for the other partitions.
    Using partitions the way you are doing is interesting from a different perspective. The current data is extremely small and therefore fast to access (fewer index levels), and you don't pay quite the penalty for queries that span a lot of partitions - e.g. a 5-year query doesn't have to hit 20 partitions the way it would with complete quarterly partitioning. However, this assumes the scheme is:
    Partition 1 = data 2+ previous years
    Partition 2 = data for previous year
    Partitions 3-6 = data for current year by quarter
    Which means at the end (or so) of each year, you will need to merge partitions.   Whenever you merge partitions, you will then need to run update stats again.
    If the scheme instead is just to have historical partitions but going forward each year will simply have data by quarter, you might want to see what the impact on queries are - especially reports on non-unique indexes where the date range spans a lot of partitions or date range is not part of the query.

  • Data load into write-optimized DSO with DTP and semantic groups

    Hi gurus,
    I'm going crazy with my current problem. I searched the other posts on this topic, but I did not find a solution. Here is my situation:
    I created a write-optimized DSO with semantic key (0UCINSTALLA, 0CALMONTH, ZBELNUM, 0UNIT) and three key figures.
    Now I'm loading data from several cubes into the DSO (I need the historical data).
    In every transformation I implemented an expert routine which collects the data into the RESULT_PACKAGE. In the routine I even clear the record number before collecting the result rows.
    In the DTPs I'm using a semantic group on 0UCINSTALLA in order to select all rows for one 0UCINSTALLA into one data package. Each DTP has to run once in full mode.
    But when I schedule the third DTP, I get the message "duplicate record". I checked the active data in the DSO; there was no record with that semantic key. I found a record with the same 0UCINSTALLA for a different month - in my opinion, that's no duplicate record...
    Is there a dependency between the semantic key in the DSO and the semantic group in the DTP? How can I solve this error?
    regards,
    philipp
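    A minimal sketch of the expert-routine pattern described above, for reference. The generated types _ty_s_SC_1/_ty_s_TG_1 and the RECORD field are the names usually produced for transformation expert routines; the fragment below is not complete on its own and only illustrates the clear-then-COLLECT step:

      METHOD expert_routine.
        " Fragment of a generated expert routine. Clearing RECORD before
        " COLLECT lets rows with identical characteristic values aggregate
        " (key figures are summed) instead of staying distinct records.
        DATA ls_result TYPE _ty_s_tg_1.
        FIELD-SYMBOLS <ls_source> TYPE _ty_s_sc_1.

        LOOP AT source_package ASSIGNING <ls_source>.
          CLEAR ls_result.
          MOVE-CORRESPONDING <ls_source> TO ls_result.
          CLEAR ls_result-record.      " drop the source record number
          COLLECT ls_result INTO result_package.
        ENDLOOP.
      ENDMETHOD.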

    Hi,
    Thanks for your fast replies!!
    @ Passing by:
    I know the option "Do not check uniqueness of data".
    No duplicates arrive at the DSO, and I would like to keep using the uniqueness check in the future in case duplicate records really do arrive.
    @ Durgesh Gandewar: Thanks for the hint, but I have already checked that website...
    Regards,
    Philipp

  • An InfoCube or DSO needs to be partitioned

    Hi All,
    1. I have a Compensation Management InfoCube (0PACM_C01). I know how to do partitioning using either 0CALMONTH or 0FISCPER, but how do I prove that it needs partitioning? Why do we have to go for it? Are there any issues if I go for partitioning?
    2. The InfoPackages of HC_EMPLIST_SNP_20XX (ZPAPA_C10) run sequentially in the process chain. Can we go for parallel processing, and if so, what are the limitations?

    Hi,
    Do a search on the wikis, blogs and threads here; there is loads of content available on this. Also, depending on your BW version: if you are on 7.3+, consider semantic partitioning.
    Leveraging Semantic Partitioning Cube in Planning and Consolidation
    Regards,
    Michael

  • DSO Logical Partitioning

    I've seen lots of discussions on logical partitioning of InfoCubes, but not a lot on logical partitioning of DSO objects.
    Can anyone speak to logical partitioning of DSOs or point me to some examples?
    Thanks

    Kenneth,
    Logical partitioning refers to splitting the data into smaller logical chunks by way of design ... for example having two cubes with one cube for data for 2008 and one for 2009 would be an example of logical partitioning.
    Similar approaches can be taken for DSO's as well - have separate DSOs one for each country... things like that.
    Also please note that this is not the same as physical partitioning, which refers to partitioning the fact table at the database level (e.g. by 0CALMONTH or 0FISCPER).

  • My DSO is getting too large?

    The BI EarlyWatch report came out and said one of my DSOs should be reduced.
    I looked at it and here are some stats for 2010:
    201001     3,484,151
    201002     3,056,019
    201003     3,274,405
    201004     3,392,412
    201005     18,627,126
    201006     18,180,714
    201007     20,523,631
    201008     30,810,975
    201009     59,565,650
    201010     50,912,680
    201011     80,746,518
    201012     89,236,514
    Total 2010 ->     381,810,795
    You can see that it is pretty much increasing every month.
    So my questions are:
    1) So what? Is it bad that it's getting so large?
    2) What are the rest of you doing when you get large DSOs? Surely you aren't creating a new DSO for every month when you experience this.
    Thanks,
    Mike

    Hi Mike,
    I can see implementing new products causing the data load to increase in size in relation to the number of new products created. Do you expect new products to continue being created at the same accelerating rate? What sort of data is this?
    Regarding semantic partitioning, that is just fancy-talk for exactly what you suggest originally - splitting up the DSO into multiple DSOs based on some semantic key, like time (fiscal period) or product (different ranges of products in different DSOs). In BW 7.3 this becomes a lot easier with the semantically-partitioned object (SPO), but before that it can be quite a pain.
    The primary problem with DSOs that are getting as large as the one you have here is logistical. It should continue working, but it may start taking relatively longer and longer to load because index maintenance will get slower as the number of rows in the DSO gets larger. Eventually you will get to the point where the load will run so long that the job is likely to be interrupted intermittently by server issues. If the DSO is a write-optimized type DSO that would probably help alleviate the problem quite a bit as you will not need to activate the data.
    You also can't partition DSOs at the database level, so if it gets really enormous you could run into deeper trouble, but you should talk to your DBA about the point at which this will become an issue.
    Ethan

  • Is there a way of partitioning the data in the cubes

    Hello BPC Experts,
    we are currently running an AppSet with 4 applications. Two of these are getting really big.
    In BPC for MS there is a way to partition the data, as I saw in the How-Tos.
    In the NW version, BPC queries the MultiProvider. Is there a way to split the underlying basis cube into several (split by time or legal entity)?
    I think this would help to increase the speed a lot, as data could be read in parallel.
    Help is very much appreciated.
    Daniel

    Hi Daniel,
    The short answer to your question is that, no, there is not a way to manually partition the infocubes at the BW level. The longer answer comes in several parts:
    1. BW automatically partitions the underlying database tables for BPC cubes based on request ID, depending on the BW setting for the cube and the underlying database.
    2. BW InfoCubes are very different from MS SQL server cubes (ROLAP approach in BW vs. MOLAP approach usually used in Analysis Services cubes). This results in BW cubes being a lot smaller, reads and writes being highly parallel, and no need for a large rollup operation if the underlying data changes. In other words, you probably wouldn't gain much from semantic partitioning of the BW cubes underlying BPC, except possibly in query performance, and only then if you have very high data volumes (>100 million records).
    3. BWA is an option for very large cubes. It is expensive, but if you are talking 100s of millions of records you should probably consider it. It uses a completely different data model than ROLAP or MOLAP and it is highly partition-able, though this is transparent to the BW administrator.
    4. In some circumstances it is useful to partition BW cubes. In the BW world, this is usually called "semantic partitioning". For example, you might want to partition cubes by company, time, or category. In BW this is currently supported through manually creating several basic cubes under a multiprovider. In BPC, this approach is not supported. It is highly recommended to not change the BPC-generated Infocubes or Queries in any way.
    5. If you have determined that you really need to semantically partition to manage data volumes in BPC, the current best way is probably to have multiple BPC applications with identical dimensions. In other words, partition in the application layer instead of in the data layer.
    Hopefully that's helpful to you.
    Ethan

  • Semantic and logical partitions

    Hi All,
    Please can someone help me to understand the advantages of semantic partitioning in BW 7.3 over logical partitioning in BI 7.0?
    thanks

    That is a good question. A semantically partitioned cube is not the same as a MultiProvider, and the table RRKMULTIPROVHINT is specifically designed for MultiProviders, so my initial thought is that it is not required to maintain it.
    However, there are many similarities between a MP and a partitioned cube, and you want to make sure that at runtime only the relevant partitions are accessed.
    I would expect SAP to automatically provide DB hints based on the definition of the partitioning. The reason this does not happen with MultiProviders is that it is hard to automatically 'predict' which characteristics are used for partitioning. For a semantically partitioned cube, this is defined in the system, so SAP should take it into account at execution time.
    Jan.

  • DB partition Pruning

    Hello, someone told me that a HANA DB partition will not prune, and that the advantage of using semantic partitioning over DB partitioning is pruning. Is that correct? Thanks, Amir

    This is not correct.
    SAP HANA does support partition pruning on DB level.
    Using SPOs in SAP BW on HANA however provides a lot of additional benefits that can only be leveraged in the application/BW layer.
    Thus, the recommendation to use SPOs is valid.

  • BADI for Semantic Partitioning

    Hi ,
    I am done with semantic partitioning. I would like to automate the characteristic values for the filters used, and I would like to use the BADI to perform this task. However, when I try to check the BADI option on the filters screen, it is disabled. Can someone please explain how the checkbox can be enabled on the filters screen of the semantic partition so that the required field can be picked up by the BADI?
    Help is appreciated.
    Thanks

    Hi,
    First select your SPO object and click on Edit; it should take you to the Edit Semantically Partitioned Object screen. Select Modelling and choose 'Change partitions' on the left-hand side of the screen; it will take you to the Maintenance of Criteria for Partitioned Object screen. Then open the 'Build Version From' dropdown, where you will find 'BADI Implementation'.
    If you are not able to find the BADI Implementation there, then you should check with the installation team before you proceed further.
    Regards,
    Subba Rao M
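    For reference, a bare-bones skeleton of such an implementation is sketched below. Caveat: the BAdI definition for SPOs is RSLPO_BADI_PARTITIONING, but the interface and method names here are taken from community examples and should be verified in SE18/SE24 on your system; the class name and all values are placeholders.

      " Skeleton only - verify IF_RSLPO_BADI_PARTITIONING and its
      " method names before use; all names below are placeholders.
      CLASS zcl_spo_partitioning DEFINITION PUBLIC FINAL CREATE PUBLIC.
        PUBLIC SECTION.
          INTERFACES if_rslpo_badi_partitioning.
      ENDCLASS.

      CLASS zcl_spo_partitioning IMPLEMENTATION.
        METHOD if_rslpo_badi_partitioning~get_t_spo.
          " Return the list of SPO names this implementation serves.
        ENDMETHOD.
        METHOD if_rslpo_badi_partitioning~get_t_part.
          " Return one entry per partition, e.g. one per plant read
          " from master data, so a new plant gets a partition on the
          " next activation of the SPO.
        ENDMETHOD.
        METHOD if_rslpo_badi_partitioning~get_t_crit.
          " Return the characteristic value ranges that define the
          " filter of each partition returned above.
        ENDMETHOD.
      ENDCLASS.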

  • SAP BW 7.30 Differentiators and Upgrade

    Hi All
    We are in the process of considering an upgrade of our present BI 7.0 (EhP1) system to the latest BW 7.30 version. I was able to read a document released by SAP on BW 7.3 in April this year and have also read all the blogs related to 7.3, but I would like to understand in detail the impact and benefits our company will get when we upgrade our BW system to the latest version. So I would like to know:
    1) BW 7.3 differentiators
    a) What additional features are being added? (I know about the new Dataflow Generation wizard, the changes in the InfoPackage for automatic init loads, and the semantic partitioning of DSOs and cubes, but I would like to know what other features or changes are added.)
    b) How does BO integration work with BW 7.3? (The document says it will be easy to integrate BOE. Could you please explain how and why the integration is simplified, and how the integration of Crystal Reports Server or the Xcelsius tool will work? Is Crystal Reports Server or Xcelsius provided as a default add-on to the new SAP NetWeaver BW 7.3 server, or will we have to maintain them separately on different machines/servers?)
    c) What are the front-end tools for BW 7.3? (Are any new front-end tools added? Will our BEx Query Designer or Explorer still be supported with 7.3?)
    2. Upgrade options/considerations
    a) How do we upgrade to BW 7.3? (Presently we are on SAP NetWeaver 2004s 7.0 and recently installed EhP1, so for BW 7.3 do we need to install EhP3, or do we need to change the present server?)
    b) What are the prerequisites for the upgrade? (Any hardware limitations, server specifications, or mandatory components like BWA?)
    c) What are the minimum hardware and front-end requirements?
    3. Compatibility
    a) With which ECC, CRM, SRM, ... versions will BW 7.3 be compatible?
    b) How will BEx reports work on BW 7.3?
    c) Is WAD supported on BW 7.3?
    d) Will the Admin Cockpit work on BW 7.3?
    e) Will BW security roles work on BW 7.3?
    Thanks in advance,
    Krishna
    NOTE: I have already gone through the SAP PPT released in April, but was not able to get answers to my questions above, so I request you not to point me to the same PPT again.

    Hello All,
    I am also doing a BW 7.3 upgrade and am wondering if you have a project plan I could leverage.
    Sorry to impose like this; however, I am at a disadvantage since I have never done a BW upgrade before.
    Would you be able to assist?
    Thanks so much.
    Russ Smith (Chicago)....
    C 847-910-8360

  • Function module to read data from a SPO

    Hi guys,
    Inside a transformation I have a rule of type ABAP routine, in which I need to read data from an SPO (semantically partitioned object). Is there an ABAP function module or class that allows reading data from an SPO?
    The SPO from which I need to read the data is DSO-based.
    Thanks in advance.
    David.

    Hi David,
    The normal procedure is to use function module RSDRI_INFOPROV_READ. However, according to the documentation it only works for DSOs, InfoCubes and MultiProviders.
    As a workaround, could you access a MultiProvider? Or use some logic to first determine which PartProvider of the DSO-based SPO is required and then use the function module to read the data? A sketch of this follows below.
    Best regards,
    Sander
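    Building on that suggestion, here is a hedged sketch of such a read. RSDRI_INFOPROV_READ and the RSDRI_* types are standard; the part provider name ZSPOD_01, the characteristic 0PLANT, the key figure 0AMOUNT and the filter value are assumptions to be replaced with your own objects (check the generated PartProvider names in RSA1):

      TYPE-POOLS: rs, rsdri.

      DATA: lth_sfc      TYPE rsdri_th_sfc,   " characteristics to read
            lth_sfk      TYPE rsdri_th_sfk,   " key figures to read
            lt_range     TYPE rsdri_t_range,  " selection restrictions
            ls_sfc       TYPE rsdri_s_sfc,
            ls_sfk       TYPE rsdri_s_sfk,
            ls_range     TYPE rsdri_s_range,
            l_end        TYPE rs_bool VALUE rs_c_false,
            l_first_call TYPE rs_bool VALUE rs_c_true.

      " Result structure: one field per alias requested below
      TYPES: BEGIN OF ty_data,
               plant  TYPE /bi0/oiplant,
               amount TYPE /bi0/oiamount,
             END OF ty_data.
      DATA lt_data TYPE STANDARD TABLE OF ty_data.

      ls_sfc-chanm    = '0PLANT'.            " characteristic
      ls_sfc-chaalias = 'PLANT'.
      INSERT ls_sfc INTO TABLE lth_sfc.

      ls_sfk-kyfnm    = '0AMOUNT'.           " key figure, summed
      ls_sfk-kyfalias = 'AMOUNT'.
      ls_sfk-aggr     = 'SUM'.
      INSERT ls_sfk INTO TABLE lth_sfk.

      ls_range-chanm  = '0PLANT'.            " optional filter
      ls_range-sign   = 'I'.
      ls_range-compop = 'EQ'.
      ls_range-low    = '1000'.
      APPEND ls_range TO lt_range.

      " Read the generated part DSO (name assumed) in packages
      WHILE l_end = rs_c_false.
        CALL FUNCTION 'RSDRI_INFOPROV_READ'
          EXPORTING
            i_infoprov    = 'ZSPOD_01'
            i_th_sfc      = lth_sfc
            i_th_sfk      = lth_sfk
            i_t_range     = lt_range
            i_packagesize = 50000
          IMPORTING
            e_t_data      = lt_data
            e_end_of_data = l_end
          CHANGING
            c_first_call  = l_first_call
          EXCEPTIONS
            OTHERS        = 1.
        IF sy-subrc <> 0.
          EXIT.
        ENDIF.
        " ... process the package in lt_data here ...
      ENDWHILE.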

  • SAP BW 7.31 - Transport Issue with Generated Objects

    Team,
    We have two landscapes: Business As Usual (BAU, on SAP BW 7.01 SP8) and a Project landscape (on SAP BW 7.31 SP10). The Project landscape development system is a copy of the BAU development system and was upgraded to 7.31. The same applies to the QA system.
    Issue: when we try to collect InfoProviders (which were created in 7.01 SP8) into a transport request in the 7.31 system, it does not allow them to be collected. We get the message below.
    Object CUBE 0FIGL_R10 is generated and cannot be transported
    Message no. RSO887
    Diagnosis
    The object CUBE 0FIGL_R10 is generated (for example, for a semantically partitioned object) and cannot be transported.
    System Response
    The object is not written in a transport request.
    Procedure
    You do not need to do anything. The main object (the semantically partitioned object) simply needs to be transported. Here the object CUBE 0FIGL_R10 is automatically created in the target system.
    Could you please let me know if you have come across the above situation, and the solution.
    Warm Regards,
    Surya

    Hi,
    you have not yet collected the cube 0FIGL_R10.
    Go to RSA1 -> select Transport Connection -> select Object Types -> select InfoCube -> expand the node -> double-click on 'Select Objects' -> enter the cube name 0FIGL_R10 -> choose Transfer Selection.
    Collect the dependent objects such as InfoPackages, DataSources, DSOs, cubes, DTPs and transformations.
    Now click on the truck symbol in the standard toolbar.
    It will then collect the objects.
    then do transport to target system.
    Thanks,
    Phani.
