Partition of ODS

Hi,
   Can you explain how to partition an ODS and the steps to do that?
Regards,
Manju

Hi,
Partitioning the ODS object itself is not possible.
But you can partition the active table at the database level.
Check this for more:
http://help.sap.com/saphelp_nw04/helpdata/en/e6/bb01580c9411d5b2df0050da4c74dc/frameset.htm
As well as this thread:
Partitioning of ODS object
Hope this helps.
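For reference, this active-table partitioning is done by the DBA at the database level, not in the Administrator Workbench. Below is a minimal Oracle sketch, assuming a hypothetical ODS ZSALES whose active table is /BIC/AZSALES00; the columns and month values are illustrative, and the real DDL must match the table definition generated in your system.

    -- Range-partition the active table of ODS ZSALES by calendar month.
    CREATE TABLE "/BIC/AZSALES00" (
      CALMONTH   VARCHAR2(6)  NOT NULL,   -- 0CALMONTH, the partitioning key
      DOC_NUMBER VARCHAR2(10) NOT NULL,
      AMOUNT     NUMBER(17,2)
    )
    PARTITION BY RANGE (CALMONTH) (
      PARTITION P200701 VALUES LESS THAN ('200702'),  -- one partition per month
      PARTITION P200702 VALUES LESS THAN ('200703'),
      PARTITION PMAX    VALUES LESS THAN (MAXVALUE)   -- catch-all for later months
    );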

Similar Messages

  • ODS Partitioning?

    Hi -
    I am looking for guidelines regarding ODS/DSO Partitioning.
    We are dealing with the LO extractor which brings billing conditions into BW/BI  (We are running BI 7).  We expect very large volumes of data from this extractor. 
    We plan to implement a design that follows EDW standards: first-level write-optimized DSO, second-level standard DSO, and finally a cube. Most of our transformation will occur between the first- and second-level DSO, including any filtering.
    As we work out the details of our design, we are trying to determine whether we should use logical partitioning for the first- and second-level DSO objects.
    Are there any guidelines around maximum ODS size? Does anyone have recommendations regarding ODS partitioning?
    I have searched the forum and found plenty of information on logical partitioning of Cubes but have found nothing regarding ODS objects.
    Points will be awarded....  Thanks for your help.

    Wond -
    Thanks, your answer is helpful. I guess in terms of loading from the ODS - I will be loading a delta to the cube, so the actual load will come from the change log; it will not be by selection. The change log won't be large, so except for the initial load it should be manageable.
    Regarding activation: when activation occurs, determining whether an existing record exists would be done via the semantic key... correct? There would be an index on the key of the ODS, so the search would not be a sequential read through the entire table; it would be an index search. So, would activation really suffer as you suggested?
    Let's say I decide to partition it - what number of records or size per ODS would I want to achieve? Again, I would look for a guideline from SAP or others from experience to say: if you are going to go through the work of partitioning your ODS, you want to keep each one under X records or X size.
    Any ideas?
    Some points awarded.... some points remain!!  Thanks.

  • Partition ODS

    Hi,
    We have 4 ODS objects (one per month), and then a MultiProvider to join them. Each ODS holds around 200,000,000 records, and we are thinking about partitioning the active table. When the month changes, we want to delete one ODS (month - 3) and then load the new month into this ODS.
    How can we partition the ODS from BW?
    Can the partitions be recreated automatically when the new month comes?
    Thanks
    Victoria

    Partitioning can only be done before data is loaded into the ODS.
    Once data has been loaded, repartitioning is possible only as of BI 7.x (repartitioning means partitioning after data has already been loaded into the ODS).
    If you want to partition, you can create 4 partitions based on month in the active table after emptying the ODS. The partitioning cannot be automatic; instead, you create the partitions for the 4 months when you set up the partitioning, and the ODS will then store data in the active table accordingly.
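    To make the monthly roll concrete, here is a hedged Oracle sketch of what a DBA could script each month (continuing the illustrative /BIC/AZSALES00 active table and partition names from the earlier sketch, and assuming a PMAX catch-all partition exists): dropping a partition replaces the deletion of 200,000,000 rows with a fast DDL operation.

        -- Drop the oldest month (month - 3) instead of running a mass DELETE.
        ALTER TABLE "/BIC/AZSALES00" DROP PARTITION P200701;
        -- Carve the new month's partition out of the catch-all partition.
        ALTER TABLE "/BIC/AZSALES00" SPLIT PARTITION PMAX AT ('200706')
          INTO (PARTITION P200705, PARTITION PMAX);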

  • Reg. table partition

    Hi
    Can anyone explain table partitioning:
    what it is and when it can be used?
    regards
    Sridhar
    [email protected]

    Hi,
    Partitioning means you split the whole dataset of an InfoCube into smaller, physically independent, and redundancy-free units. Because of this, query performance improves during reporting, and deleting data from the InfoCube is faster as well.
    We can partition the ODS/cube based on 0CALMONTH or 0FISCPER.
    Double-click your ODS/cube.
    Extras > Partitioning.
    This is called physical partitioning. We do this partitioning before loading data into the InfoProvider.
    You can even do it after the load, but then you have to move the data.
    There is also another kind of partitioning: LOGICAL PARTITIONING.
    Example: if we have to maintain 3 years of data,
    we maintain 2005 data in one cube, 2006 data in another cube, and 2007 data in a third cube.
    A different cube for different years... and we group these cubes under a MultiProvider.
    Check the following link regarding partitioning:
    http://help.sap.com/saphelp_nw04/helpdata/en/0a/cd6e3a30aac013e10000000a114084/frameset.htm
    Partitioning

  • Add new InfoObject to LARGE ODS

    I am adding a new InfoObject to an existing ODS. This ODS has over 100 million records. It takes forever to add the InfoObject.
    I think the issue is that SAP does not allow nulls, so it actually drops all the data and reloads it with the new field populated with spaces.
    In our environment the problem is normally not an issue in DEV, since there is limited data, but when transporting through the landscape the transport takes forever, since the other environments hold a lot of data.
    I proved it is not just a transport issue by doing it directly in a system with over 100 million records - it still took forever with no transport involved.
    Has anyone else had a similar problem with large ODSs, and are there any workarounds?
    Thanks.

    SE14 will do the same...
    It is the RDBMS which does that internally. I once did this with a huge table in Oracle, with and without logging the operation; the performance is much better without logging. On the other hand it's a bit risky, so we usually plan it accordingly (weekend) and just wait until it is finished...
    Another option is to extract the whole ODS into the PSA, delete the ODS contents, remove the secondary indexes, add the InfoObject, and then reload the ODS from the PSA and rebuild the indexes; this is actually the most secure way, but again you'll need to be patient...
    Finally, you could perhaps logically partition your ODS to avoid having such a monster in your DB, although 100 million records is still OK; it really becomes a problem above 500 million records...
    Olivier.
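    To illustrate the logging remark: one common way to avoid redo logging for such a rebuild in Oracle is a direct-path CREATE TABLE AS SELECT with NOLOGGING, then swapping the tables. This is a hypothetical sketch with illustrative names, not necessarily the exact procedure described above; because redo is skipped, the operation cannot be recovered from the logs, hence the weekend planning.

        -- Rebuild the active table with the new field appended, skipping redo logging.
        CREATE TABLE NEW_ACTIVE NOLOGGING AS
          SELECT t.*, ' ' AS NEW_FIELD   -- new InfoObject initialized to space
          FROM "/BIC/AZSALES00" t;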

  • Disadvantages of Partitioning?

    Hi All,
    What are the disadvantages of partitioning?

    I'm assuming you're asking about DB (physical) partitions rather than logical partitioning of an InfoCube.
    For InfoCubes, each request is loaded to a new partition of the F fact table. You don't control that - it's automatic. Some partition issues are unique to the DB in use. Generally, you would be compressing an InfoCube regularly, so you would not normally have so many partitions on the F fact table that it becomes an issue. If you don't compress your cubes, having several hundred partitions in the F fact table will begin to affect query performance, and would certainly affect index rebuild time if you drop and rebuild indices on a cube as part of your load process.
    As for partitioning the E fact table, that is optional. It can only be done while the InfoCube is empty. Currently, you can only partition a cube on 0FISCPER or 0CALMONTH, so it is difficult to ever see a problem with having too many partitions of the E fact table: even 10 years of data is at most about 120 partitions. Partition pruning is the process the DB uses when it is going to run a query on a partitioned cube. Let's say you have a cube that holds 5 years of data, 2001 - 2005. If you created monthly partitions on 0CALMONTH, you would have 62 partitions: 12 per year, plus 2 that the BW generates - one for any data that might come before 2001 and one for any data coming after 2005. Now if you run a query that selects data from 01/2005 - 12/2005, the DB prunes (excludes) all the partitions outside this range from consideration, so it only needs to consider 12 partitions of data for the query instead of all 5 years. But if your queries usually select on just 0CALYEAR, 0FISCPER, or 0FISCYEAR, then the DB must look at all the partitions. Queries MUST restrict on the partitioning characteristic for the DB to take advantage of pruning to speed up the query. A sketch follows this reply.
    You can partition the E fact table, but if your queries don't restrict on the partitioning characteristic (0FISCPER or 0CALMONTH) you won't see any query benefit. Partitions can provide some data administration benefit if you perform deletions or archiving based on the partitioning characteristic, allowing the BW to perform a quick Drop Partition rather than having to run a resource-intensive delete query.
    If the cube is very small, it probably isn't worth partitioning, but the size at which to consider partitioning may depend on your queries and how often they run.
    Partitioning a large ODS has the same type of benefits. You need to have your DBA partition an ODS, since it cannot be done from the Administrator Workbench. An ODS can be partitioned on other characteristics, e.g. Business Area, so you have more flexibility.
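    Here is a hedged SQL sketch of partition pruning (table and column names are illustrative, for a cube ZSALES partitioned monthly on 0CALMONTH): only the query that restricts on the partitioning column can be pruned.

        -- Pruned: the predicate is on the partitioning column, so the DB
        -- reads only the 12 partitions for 2005 out of the 62.
        SELECT SUM(AMOUNT) FROM "/BIC/EZSALES"
         WHERE SID_0CALMONTH BETWEEN 200501 AND 200512;

        -- Not pruned: no restriction on the partitioning column, so all
        -- partitions must be considered.
        SELECT SUM(AMOUNT) FROM "/BIC/EZSALES"
         WHERE KEY_ZSALESP = 42;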

  • How to improve query performance of an ODS with 320 million records

    Issue:
    The reports are timing out during execution.
    Scenario:
    We have an ODS with approximately 320 million records in it.
    The reports are based on the ODS and on InfoSets built on this ODS.
    These reports are timing out during execution.
    Few facts about this ODS:
    There are around 75 restricted and calculated key figures used in the query definition.
    We can't replace this ODS with a cube, as there is a requirement for an InfoSet on it.
    This is in a BW 3.5 environment.
    Few things we tried:
    Secondary indices were created on the fields that appear in the selection screens of the reports. This has not worked.
    The restriction/calculation logic in the query definition could be moved to the backend. Would that make a difference?
    Question:
    Can you suggest ways to improve the query performance of this ODS?
    Your immediate response is highly appreciated. Thanks in advance.

    Hey!
    I think Oliver's questions are good. 320 million records is too much for an ODS. If you can get rid of the InfoSet, that would be helpful. Why exactly do you need it? If you don't need it, you could partition your ODS by a characteristic and report over a MultiProvider.
    Is there a way to delete some data from the ODS?
    Maybe you will upgrade to 7.0 soon? There you can use InfoSets on InfoCubes.
    You could also try precalculation, as Sam says. This is possible with the Reporting Agent or Information Broadcasting; then you have the result in your cache. Make sure your cache is large enough. Maybe you can use a table or something similar.
    Do you just need to run one or a few special reports at particular times? Maybe you can do an update into another ODS, writing just the result into it. For this you can use update rules, or maybe the Analysis Process Designer (transaction RSANWB) is the better way.
    Maybe it is also possible to increase the runtime parameter for your dialog processes, rdisp/max_wprun_time (if you don't know it, your Basis team should; otherwise look here: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ab254cf2-0c01-0010-c28d-b26d04627e61). A sketch of the profile setting follows this reply.
    Best regards,
    Peter
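    Regarding that last point, the timeout limit is an SAP instance profile parameter. A minimal sketch of the change (the value of 3600 is only an example; your Basis team maintains this in the instance profile or via RZ11):

        # Instance profile excerpt: allow dialog work processes to run for
        # one hour before hitting TIME_OUT (the default is typically 600 seconds).
        rdisp/max_wprun_time = 3600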

  • How can we speed up the delta process?

    Hi All,
    As per the control parameter settings, we have a maximum of 6 processes and 10 data packages per Info IDoc, and the same settings for that particular InfoSource.
    But when we schedule the delta InfoPackage (R/3 -> BW ODS), it uses only one process and one data package per IDoc.
    When the data is updated from the ODS to the cube, it uses 6 processes (in SM66) and almost 6-10 data packages (Details tab on the scheduler page).
    How can we speed up the delta load from R/3 to the ODS and make it use up to 6 processes?
    Where can I check these settings?
    Thanks in advance.
    Robyn.

    Hi,
    one method to improve performance when loading to an ODS:
    go for PSA partitioning through RSCUSTV6,
    and you can also set this up for the ODS object via RSCUSTA2;
    then the whole dataset will be split into several data packets in the PSA and in the ODS.
    Hope it helps,
    Bhaskar.

  • BW doubts...

    1. How do I copy a cube - data from cube 1 to cube 2, and queries from cube 1 to cube 2?
    2. I have created the following InfoPackages and scheduled them individually:
    InfoPackage 1 for vendor attributes, scheduled at 10:00 a.m.
    InfoPackage 2 for vendor texts, scheduled at 10:30 a.m.
    InfoPackage 3 for material attributes, scheduled at 11:00 a.m.
    InfoPackage 4 for material texts, scheduled at 11:30 a.m.
    InfoPackage 5 for the purchasing cube, scheduled at 11:45 a.m.
    I put all these InfoPackages into an InfoPackage group scheduled at 11:55 a.m.
    If I run my InfoPackage group, will it start at 11:55, or does it follow the individual InfoPackage timings, i.e. start at 10:00 a.m.?
    How do we handle this situation in a process chain scheduled at 2:00 p.m.?
    Is it the same in BW 3.x and BI?
    3. What are logical partitioning and physical partitioning?
    4. Should I partition the ODS?
    Please let me know in detail.
    Thanks in advance...

    Hi Srilaxmi,
    You can copy one cube to another using an export DataSource. For example, to copy ZCUBE1 to ZCUBE2:
    1. Right-click ZCUBE1 and choose the option "Export DataSource" at the bottom of the menu. This creates a DataSource named 8<cube name>; for cube ZCUBE1, the DataSource will be 8ZCUBE1.
    2. Create a new cube ZCUBE2 using cube ZCUBE1 as a model. Activate ZCUBE2.
    3. Right-click ZCUBE2 and choose "Create Update Rules". Enter "8ZCUBE1" in the InfoSource field. Activate your update rules.
    4. Create an InfoPackage for the 8ZCUBE1 DataSource and schedule the load.
    If you want to copy the queries from ZCUBE1 to ZCUBE2, go to transaction UG_BW_RSZC - Copy. Fill in the fields this way:
    Source InfoProvider: ZCUBE1
    Target InfoProvider: ZCUBE2
    Under "Select component" choose "Queries" and click Execute.
    In the next window, choose the queries you want to copy and click Transfer Selections. In the following window you can rename each query to identify it better. Click the check mark and wait for the confirmation message.
    Note that you should only copy queries between cubes that have the same structure.
    Hope this helps.
    Regards,
    Karim

  • How to create DB partitioning in active data tables for an ODS?

    Hi all,
    Can anyone let me know how to create DB partitioning in the active data tables for an ODS? If there are any docs, please share them with me at my email id: [email protected]
    Regards,
    Haritha

    Haritha,
    The following will briefly explain how to improve performance in terms of DB partitioning as well as loading:
    transaction RSCUSTA2,
    OSS notes 120253, 565725, 670208,
    and remove the 'BEx Reporting' setting in the ODS if that ODS is not used for reporting.
    Hope this helps.
    565725
    Symptom
    This note contains recommendations for improving the load performance of ODS objects in Business Information Warehouse Release 3.0B and 3.1 Content.
    Other terms
    Business Information Warehouse, ODS object, BW, RSCUSTA2, RSADMINA
    Solution
    To obtain a good load performance for ODS objects, we recommend that you note the following:
    1. Activating data in the ODS object
    In the Implementation Guide in the BW Customizing, you can make various settings under Business Information Warehouse -> General BW settings -> Settings for the ODS object that improve performance when you activate data in the ODS object.
    2. Creating SIDs
    The creation of SIDs is time-consuming and can be avoided in the following cases:
    a) You should not set the BEx Reporting indicator if you are only using the ODS object as a data store. Otherwise, setting this indicator causes SIDs to be created for all new characteristic values.
    b) If you are using line items (for example, document number, timestamp and so on) as characteristics in the ODS object, you should mark these as 'Attribute only' in the characteristics maintenance.
    SIDs are created at the same time if parallel activation is active (see above). They are then created using the same number of parallel processes as set for the activation. However: if you specify a server group or a special server in the Customizing, these specifications only apply to the activation, not to the creation of SIDs. The creation of SIDs runs on the application server on which the batch job is also running.
    3. DB partitioning on the table for active data (technical name /BIC/A<ODS name>00)
    The process of deleting data from the ODS object can be accelerated by partitioning at the database level. Select the characteristic by which you want deletion to occur as the partitioning criterion. For more details on partitioning database tables, see the database documentation (DBMS CD). Partitioning is supported by the following databases: Oracle, DB2/390, Informix.
    4. Indexing
    Selection criteria should be used for queries on ODS objects. The existing primary index is used if the key fields are specified. As a result, the characteristic that is accessed most frequently should be left-justified. If the key fields are only partially specified in the selection criteria (recognizable in the SQL trace), the query runtime can be optimized by creating additional indexes. You can create these secondary indexes in the ODS object maintenance (see the sketch after this note).
    5. Loading unique data records
    If you only load unique data records (that is, data records with a one-time key combination) into the ODS object, load performance will improve if you set the 'Unique data record' indicator in the ODS object maintenance.
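    As an illustration of point 4, the DDL behind such a secondary index boils down to something like the following sketch (hypothetical table, column, and index names; in practice you define the index in the ODS maintenance screen and BW generates it for you):

        -- Secondary index on a non-leading field used in selection criteria,
        -- so queries on it need not scan the whole active table.
        CREATE INDEX "/BIC/AZSALES00~Z01" ON "/BIC/AZSALES00" (DOC_NUMBER);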
    Hope this helps..
    ****Assign Points****
    Thanks,
    Gattu

  • Logical partitioning of an ODS - when / what size, e.g. 100 million records?

    Hi Folks,
    we have an ODS/DSO with about 80 fields, and we are currently planning further rollouts which will lead to an overall volume of about 100 million records in the DSO.
    I wonder if this volume is still fine for a DSO in terms of reporting and loading/activation performance.
    Or is there a "rule of thumb" to keep, let's say, only 50 million records in an ODS and then go for a logical partitioning approach in larger scenarios:
    50 million -> Region EMEA, APJ
    50 million -> Region AMS
    Thanks for your inputs in advance,
    Axel

    100 million records is not such a big DSO. You should not encounter problems loading and/or activating your DSO.
    You may encounter performance problems with reporting functionality, but that will depend on the reporting you do. And anyway, if you really want to report on this data, why don't you put this level of detail in a cube (logically partitioned or not)?
    You can logically partition any kind of InfoProvider, but I've never seen it done for a DSO (I'd rather partition the upper levels and keep one DSO with all the data).
    Regards,
    Fred

  • ODS partition

    Hello,
    I just want to know whether it is possible to partition a DSO.
    If yes, how do we do this, and what happens at the database level?

    Hi Viral,
    You can use multidimensional clustering in a DSO, which is similar to partitioning in cubes.
    To do this, double-click the DSO in the Modeling view of the Administration Workbench.
    Go to Extras -> DB Performance -> Clustering.
    Multidimensional clustering (MDC) physically organizes a table in blocks. Each block only contains data records with the same values in the MDC dimensions.
    Multidimensional clustering can significantly improve the performance of database queries. If the queries contain restrictions on MDC dimensions, only the blocks of the table that contain the corresponding dimension values must be read. For data access, block indexes are used, which are considerably smaller than conventional row indexes and can therefore be searched more quickly. A DDL sketch follows this reply.
    Best regards,
    Sunmit.
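    For reference, MDC is a DB2 feature; here is a minimal DDL sketch of an active table clustered on calendar month (names are illustrative, and BW generates the actual table through the dialog described above).

        -- DB2: rows are physically grouped in blocks by CALMONTH; queries
        -- restricting on it read only the matching blocks via the block index.
        CREATE TABLE "/BIC/AZSALES00" (
          CALMONTH   VARCHAR(6)  NOT NULL,
          DOC_NUMBER VARCHAR(10) NOT NULL,
          AMOUNT     DECIMAL(17,2)
        )
        ORGANIZE BY DIMENSIONS (CALMONTH);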

  • Logical vs. Physical Partitioning

    Hi,
    In a discussion of logical partitioning, the author pointed out that
    “… if a query that needs data from all 5 years would then automatically (you can control this) be split into 5 separate queries, one against each cube, running at the same time. The system automatically merges the results from the 5 queries into a single result set.”
    1. Can you direct me on how to “control this” as the author indicated?
    2. Also, the author noted that
    “… Physical Partitioning - I believe only Oracle and Informix currently support Range partitioning. …”
    a) Does it mean that BW does not use physical partitioning?
    b) Where do we indicate physical or logical partitioning as an option in BW?
    c) Or, when we talk about dimension tables, etc., is there always an underlying database such as Oracle, Informix, etc. in BW? If so, what does BW use?
    3. For physical partitions, I read that the cube needs to be empty before it can be partitioned. What about logical partitions?
    4. Finally, what are the underlying criteria for deciding on logical or physical partitioning, or both?
    Thanks

    1. Can you direct me on how to "control this" as the author indicated?
    You make this setting in RSRT.
    2. Also, the author noted that
    "… Physical Partitioning - I believe only Oracle and Informix currently support Range partitioning. …"
    DB2 also supports partitioning, and the current release of SQL Server supports partitioning as well.
    b) Where do we indicate physical or logical partitioning as an option in BW?
    Physical partitions are set up in cube change mode: on the menu, choose Extras -> DB Performance -> Partitioning.
    A screen will then pop up listing the time characteristics in the cube; choose the characteristic(s) you want and confirm the entries. You will then get another small popup where you set the number of partitions.
    Also, please note that you can partition only on 0FISCPER and 0CALMONTH, not on other time characteristics.
    Logical partitions: logical partitioning is nothing but splitting the cube into numerous cubes of smaller size. You combine all these cubes by means of a MultiProvider. For example, if you have 1000 cost centers, you may want to split the cube based on cost center number ranges and combine the cubes into a MultiProvider.
    No further setting is required.
    c) Or, when we talk about dimension tables, etc., is there always an underlying database such as Oracle, Informix, etc. in BW? If so, what does BW use?
    Dimension tables, fact tables, ODS tables, and master data tables are all database tables. Whichever database you use, when you activate these objects the tables are created in that underlying database.
    3. For physical partitions, I read that the cube needs to be empty before it can be partitioned. What about logical partitions?
    Logical partitioning can be done at any time.
    4. Finally, what are the underlying criteria for deciding on logical or physical partitioning, or both?
    The underlying criteria are factors such as:
    (a) the number of years of history you wish to view in reports,
    (b) the number of years you will hold the data in BW before archiving,
    (c) other performance matters related to sizing.
    Ravi Thothadri

  • ODS to CUBE loading - taking too much time

    Hi Experts,
    I am loading data from R/3(4.7) to BW (3.5).
    I am loading with the option --> PSA and then into the data target (ODS).
    I have selection criteria in the InfoPackage when loading from the standard DataSource to the ODS.
    It takes me 20 mins to load 300K records.
    But from the ODS to the InfoCube (update method: Data Target Only), it is taking 8 hours.
    The data packet size in the InfoPackage is 20,000 (the same for the ODS and the InfoCube).
    I also tried changing the data packet size, and tried a full load and a load with initialization.
    I tried scheduling it as a background job too.
    I do not have any selection criteria in the InfoPackage from the ODS to the cube.
    Please let me know how I can decrease this loading time from the ODS to the InfoCube.

    Hi,
    To improve data load performance:
    1. If they are full loads, see if you can make them delta loads.
    2. Check if there are complex routines/transformations being performed in any layer. In that case, see if you can optimize that code with the help of an ABAPer.
    3. Ensure that you are following the standard procedures in the chain, like deleting indices/secondary indices before loading, etc.
    4. Check whether the system processes are free when this load is running.
    5. Try making the load as parallel as possible if it is happening serially. Remove the PSA step if it is not needed.
    6. When the load does not get processed due to a huge volume of data or a large number of records per data packet, try the options below:
    1) Reduce the IDoc size to 8000 and the number of data packets per IDoc to 10. This can be done in the InfoPackage settings.
    2) Run the load only to the PSA.
    3) Once the load is successful, push the data to the targets.
    In this way you can overcome this issue.
    Ensure proper data packet sizing and also number range buffering, the PSA partition size, and the upload sequence, i.e. always load master data first, perform the change run, and then run the transaction data loads.
    Check this doc on BW data load performance optimization:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    BI Performance Tuning
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap
    Thanks,
    JituK

  • Logical partitioning, pass-through layer, query pruning

    Hi,
    I am going through performance guidelines for BW and have encountered a few interesting topics which I do not fully understand.
    1. Maintenance of logical partitioning.
    Let's assume logical partitioning is performed by year. Does it mean that every year or so it is necessary to create an additional cube/transformation and modify the MultiProvider? Is there any automatic procedure from SAP that supports the creation of the new objects, or is it fully manual?
    2. Pass-through layer.
    There is very little information about this basic concept. Anyway:
    Is the pass-through DSO a write-optimized one? Does it store only one load - the last one? Is it deleted after loading has finished successfully (or before a new load starts)? And does this deletion not destroy the delta mechanism? Does the DSO replace the PSA functionally (i.e. can the PSA be deleted after every load as well)?
    3. Query pruning.
    Does this happen automatically at the DB level, or are additional developments with exit variables, steering tables, and FMs required?
    4. DSOs for master data loads.
    What is the benefit of using full master data extraction and a DSO delta instead of master data delta extraction?
    Thanks,
    Marcin

    1. Maintenance of logical partitioning.
    Let's assume logical partitioning is performed by year. Does it mean that every year or so it is necessary to create an additional cube/transformation and modify the MultiProvider? Is there any automatic procedure from SAP that supports the creation of the new objects, or is it fully manual?
    Logical partitioning is when you have separate ODSs/cubes for separate years, etc.
    There is no automated way; however, if you want to, you can physically partition the cubes using time periods and extend them regularly using the repartitioning options provided.
    2. Pass-through layer.
    There is very little information about this basic concept. Anyway:
    Is the pass-through DSO a write-optimized one? Does it store only one load - the last one? Is it deleted after loading has finished successfully (or before a new load starts)? And does this deletion not destroy the delta mechanism? Does the DSO replace the PSA functionally (i.e. can the PSA be deleted after every load as well)?
    Usually a pass-through layer is used to:
    1. Ensure data consistency
    2. Possibly use deltas
    3. Apply additional transformations
    In a write-optimized DSO, the request ID is part of the key and hence the delta is based on the request ID. If you do not have any additional transformations, a write-optimized DSO is essentially like your PSA.
    3. Query pruning.
    Does this happen automatically at the DB level, or are additional developments with exit variables, steering tables, and FMs required?
    Query pruning depends on the rule-based and cost-based optimizers within the DB, and you do not have much control over how well you can influence the execution of a query, other than having up-to-date statistics, building aggregates, etc.
    4. DSOs for master data loads.
    What is the benefit of using full master data extraction and a DSO delta instead of master data delta extraction?
    It depends more on the data volumes and also the number of transformations required...
    If you have multiple levels of transformations, use a DSO; or if you have very high data volumes and want to identify changed records, then use a DSO.
