Transparent partition and Dynamic Calculation

I've set up a target cube which collects results from two different source cubes: one for historical data, the other for the current period. I've put formulas on accounts in the target cube. Those formulas refer to both historical and current periods. Those accounts are also set as dynamic calc in the source cubes. But even after setting them as stored members, the dynamically calculated members in the target do not take precedence. How can I solve this?

Stéphane

Stéphane,
The quickest solution is to modify your mapping so that all of the dynamic calcs are in the target partition. In other words, limit the scope of the accounts dimension in the partition so that none of the dynamic calcs are in the source of the partition. This means making a subset of the accounts dimension that includes only those accounts with data, not those that are dynamically calculated.

If you have an accounts dimension with three members - COGS (loaded), Sales (loaded), and Margin (dynamic calc: Sales - COGS) - the source partition should only include COGS and Sales, not Margin, allowing the Margin calculation to occur in the target cube.
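As a minimal MaxL sketch of that idea (application, database, and member names here are all hypothetical), the area specs on both sides of the partition simply leave Margin out, so only stored data crosses the partition and the target's own dynamic calc is the only one that can fire:

    create or replace transparent partition HistApp.HistDb
        area '"COGS", "Sales"'
        to TgtApp.TgtDb
        area '"COGS", "Sales"';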

Similar Messages

  • Data not replicated with partition and dynamic calc

    Hi,
    I am using Essbase 11.1.2.1.
    I want to use a replicated partition between two Essbase BSO applications.
    But in my source application I have to send an account, ACCOUNT (level 1), with several stored members and one dynamic calc member with a formula. Because of this dynamic calc account, the partition doesn't work (data are not replicated).
    What can I do?
    Thanks
    Fanny

    You're right, dynamic calc accounts are not the issue; I am already using them in partitions.
    But my issue is this:
    The source dimension is like this :
    TAX (Dynamic Calc)
    - 6xxxxxxx
    - 6yyyyyyy
    - 6zzzzzzz (Dynamic Calc) = formula
    The target dimension is like this :
    TAX
    And so I need to send TAX (source) to TAX (target).
    I don't think it is possible, but I would like confirmation before using something else.
    I hope it is clearer now.
    thanks
    Fanny

  • Transaction or report for showing partitions and total amount in BW 3.5?

    Hello,
    I want to perform a unicode conversion of a BW 3.5 system and found that this system has some tables with a large number of table partitions (700-1200). To calculate the table splitting, it would be very helpful if there were a transaction or report where I can enter the name of a table and get back the number of partitions and the calculated total size of all partitions, including the header table.
    Does this exist in a SAP BW 3.5 system?
    If not, are there useful alternatives?
    Thanks in advance

    Hey you two,
    Yes, I already know DB02, and I used it to show a list with object type "TABLE PART*". But then I get a list of ca. 4900 table partitions. I just want a list looking like:
    /BIC/B000637000      155 Partitions   106,0 Gb Whole Size
    I don't want the individual partitions, just their count per header table - and of course the total size. Is that also possible with DB02?

  • Dynamic calculation with transparent partition

    Suppose I have two transparent partitions: one points to Essbase cube A to get base data, the other points to Essbase cube B to get adjustment data. Can I set up a dynamic formula in the target Essbase cube like this: total = base + adjustment?

    You would have to do it the same way you would if the data were all in one cube: with either a dimension or member rollup where "After Adjustment" = "Before Adjustment" + "Adjustment".
    All the partitioning does is get the adjustment data from your "B" cube - it still must have its own "home" in the top cube where the dynamic calc happens.
    To put it another way, every target cell in the target cube must come from one and only one source cell - no addition of multiple sources as data flies through the transparent partition definition. (Note that a source cell can go to multiple target cells.)
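    As a sketch, the target outline (member names hypothetical) would give each source a stored "home" and put the addition on a Dynamic Calc member:
    Scenario
    - Base (stored; fed by the partition from cube A)
    - Adjustment (stored; fed by the partition from cube B)
    - Total (Dynamic Calc) = "Base" + "Adjustment";
    The formula fires in the target at retrieval time, after each partition has delivered its one-to-one mapped cells.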

  • DynCalc And Transparent Partitions

    Following problem: we have a big cube (13 dimensions!!!) covering 5 years of data, which I rebuild every month (for performance reasons). The 5 years are members of a separate (sparse) dimension. I also have a scenario dimension (sparse) including members like Actual, Forecast, LastYear (DynCalc), Variance (DynCalc), ... Now I am thinking of splitting the cube into 5 single cubes, one for each year (including the scenarios Actual and Forecast), that would serve as the data sources in a transparent partition. The data target would be a cube with all 13 dimensions (including all years!) and all scenarios (including the dynamically calculated ones). Unfortunately, a retrieve on e.g. the member "LastYear" (DynCalc) shows very low performance! Any ideas? (BTW: all 13 dimensions are necessary...)

    Depending on how much physical memory you have, bump up your index cache. You will also want to bump up CALCCACHEHIGH to 199999999 and refer to this setting in all your memory-intensive calc scripts. Unfortunately, transparent partitioning adds a layer of overhead to data retrieves. If you have ample storage space, try changing the member storage option to Dynamic Calc and Store. This will help performance for the next user who retrieves the data. I have a very similar model - fewer dimensions, but some huge hierarchies.
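    For reference, a hedged sketch of that advice (the value is the one from the post, not a universal recommendation): the ceiling is set in essbase.cfg, and each memory-intensive calc script then opts into it:
    ; essbase.cfg (restart the server afterwards)
    CALCCACHEHIGH 199999999
    /* at the top of the calc script */
    SET CACHE HIGH;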

  • ASO and BSO Transparent partition

    Hi
    I have a couple of questions regarding transparent partitions; please help me out.
    1) I have two ASO cubes and one empty BSO cube. Do I need to apply security (filters) on both the ASO and BSO cubes?
    2) How should I give access to users? For example, what should I put in the AS ... IDENTIFIED BY ... clause here:
    TO 'VZB_ISu'.'VB_IS' AT "localhost"
    AS "....." IDENTIFIED BY "....."
    AREA '"2006","2005",@IDESC("Month"),@IDESC("Scenario"),@IDESC("Measure_Type"),@IDESC("Measures"),@IDESC("Accounts"),@IDESC("RegionCompany"),@IDESC("Company"),@IDESC("RegionOrganization"),@IDESC("Organization"),@IDESC("Affiliate"),@IDESC("Product"),@IDESC("Ledger_Code"),@IDESC("Region"),@IDESC("Country")';
    3) Are there specific ways to tune an empty BSO cube transparently linked to an ASO cube? The BSO cube has no data.
    Thank you
    Namita

    Hi,
    1) You only need to set security on the BSO db (I'm assuming the users will not be connecting directly to the ASO db). When you set up your partition you define which user will be used to log in to the ASO database. Whenever an end user retrieves data over the partition, it is the partition user that logs into the ASO database to retrieve the data. You only need to make sure that the user defined in the partition has sufficient rights to retrieve that data.
    2) You can either use EAS to create the filter, or MaxL; this is the example for create/replace filter: create filter sample.basic.filt1 read on 'Jan, sales', no_access on '@CHILDREN(Qtr2)'; (see the grant note after this post)
    3) Try to avoid too many dynamic calcs in the target. These will obviously affect the retrieval times.
    Gee
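    To round out point 2 above: once the filter exists it still has to be granted to someone before it takes effect, e.g. in MaxL (user1 is a placeholder):
    grant filter sample.basic.filt1 to user1;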

  • MAXL and Transparent Partition

    Hello,
    I am currently working on Hyperion Essbase EPM 11.1.2.
    I am trying to create a transparent partition using a MaxL command.
    When I create this partition manually (using EAS), it validates and data are copied.
    When I run the MaxL, the partition appears but it seems to be broken; I can only repair or delete the new partition.
    If I try to repair it (in EAS), it does not work. I cannot manage to link my partition to the source and target databases.
    I have tried with two different users: Admin (native directory) and another user (admin access).
    Thanks for your answer,
    Arthur.

    Hi,
    What is the error message you are getting?
    When you create a partition via the MaxL editor, it will open the interface to create the partition. Check the MaxL it generates against your MaxL.
    Thanks,
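    For comparison, the general shape of the statement is sketched below - names, host, and credentials are all placeholders, and the user named after AS is the one the partition itself uses to connect:
    create or replace transparent partition SrcApp.SrcDb
    area '<source-area-spec>'
    to TgtApp.TgtDb at "servername"
    as "partition_user" identified by "password"
    area '<target-area-spec>';
    A partition that appears broken in EAS may simply mean that one side could not be validated with the supplied credentials, so this clause is worth checking first.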

  • MDX / BSO calculations with transparent partitions - Recursive reference

    Hi Everyone,
    I am using Essbase v11.1.2 and have an empty ASO cube that has transparent partitions to several ASO source cubes. However, I would like to use MDX in the target ASO cube to pull data from one cube, but make it available to all queries from other cubes.
    E.g. the empty target ASO cube TrgtASO has a VOLUME metric and versions Ver1, Ver2 and Ver3. I have 3 ASO source cubes, one for each version:
    SrcASO1 (the only cube with volume data)
    SrcASO2
    SrcASO3
    I had created a metric called "Vol-All versions" in the TrgtASO cube and tried to use the MDX formula ([Version].[Ver1],[Measures].[VOLUME]) so that no matter what version is selected, the data will always come from SrcASO1.
    It works if I have all data in one cube and load data only into Ver1. However, for performance and query reasons, I need to put each version into its own cube.
    My question is do transparent partitions allow recursive retrieval of data?
    I.e. If I log into TrgtASO cube and pull metric "Vol-All versions" but have either Ver2 or Ver3 as the version, is there an MDX formula that will pull the volume from Ver1 cube?
    Any assistance would be greatly appreciated.

  • Hide partition and attribute dynamically

    Hello everyone,
    We can hide partitions & attributes manually by setting the AttributeHierarchyVisible property to False,
    but can we hide all of these at process time, or dynamically?
    Thanks in advance.
    Regards,
    Jvora

    Hi jvora,
    According to your description, you want to dynamically hide partitions and attributes, right?
    In Analysis Services, the only way to hide attributes is to set AttributeHierarchyVisible to false. It is not supported to apply any condition when setting this property. Also, for processing dimensions and partitions, it is not supported to skip attributes during processing. Your requirement can't be achieved.
    Reference:
    Hiding and Disabling Attribute Hierarchies
    Best Regards,
    Simon Hou
    TechNet Community Support

  • Issue in ASO to BSO TRANSPARENT PARTITION

    Hi Techies,
    We are facing an issue with an ASO to BSO transparent partition. Please see the details below and advise.
    1. We have an ASO cube transparently partitioned onto a BSO cube.
    2. In the ASO cube, we have a dimension with 3 hierarchies under it, all stored hierarchies. When we retrieve data for these hierarchies from the BSO cube, we get data.
    3. Now we were asked to add one more stored hierarchy under the same dimension. We built the dimension and built aggregations based on the queries. After creating the partition, we are not able to retrieve data from the BSO cube for the newly added stored hierarchy; everything else is working fine as before. The following is the error I get when I try to retrieve data for the newly built stored hierarchy:
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    I increased CALCLOCKBLOCK in the .cfg file, increased the data cache size, and bounced the server. Even now I am getting the same error. Please suggest what else we can do about this kind of error.

    Did the number of blocks that couldn't be locked decrease when you increased the data cache? If so, try increasing the data cache some more, and make sure you restart the db after each change.
    Cheers
    John
    http://john-goodwin.blogspot.com/
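    For reference, a hedged sketch of the two knobs discussed in this thread (the numbers are illustrative, not recommendations). The lock-block ceilings live in essbase.cfg and need a restart; the data cache can be raised per database in MaxL:
    ; essbase.cfg
    CALCLOCKBLOCKHIGH 5000
    CALCLOCKBLOCKDEFAULT 1000
    alter database App.Db set data_cache_size 512m;
    App.Db and the values are placeholders. As far as I know, plain retrievals of dynamic calc members are governed by the default tier, since SET LOCKBLOCK only selects a tier inside calc scripts.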

  • Calculate cubes using transparent partitions

    Hello, since I added a transparent partition to my cube (my cube is the target), the calculation duration has increased dramatically, even though the calc script does not FIX on the members used in the partition. Have you met this problem before? Is it possible to set particular parameters to avoid this increase? Many thanks for your help!

    I have hesitated to suggest this because it involves changing more than your environment and calc, and it may not even be appropriate for you, but it would definitely help if you can do it. Instead of having 2 cubes, use 3 (ouch). The current cube becomes a facade to the two source cubes that contain data; call it CUBE_Facade. Also copy it to CUBE_SRC_Current (dropping the members mapped to CUBE_Source), and CUBE_Source then becomes CUBE_SRC_History. You should get the idea from this layout.
    Now for the calc. You wouldn't perform it on CUBE_Facade, but directly on CUBE_SRC_Current. If you further want to ensure that users don't interfere, you could temporarily drop the partition, but for a 3-minute calc I doubt this is a concern for you (if it is, you should also know that doing it this way is a lot easier and cleaner than trying to kick users off and disabling logins). Security would be associated with the facade, but it doesn't hurt to set it on all cubes (the users would have the ability, if not the knowledge, to connect to the source cubes, after all).
    Aside from the approach above, I can only surmise that perhaps the FIX that limits your calc to the current portion (time/scenario?) doesn't account for variance formulas (even if dynamically calculated?). A solution may be to make them stored (another ouch) and calculate them in a script instead of with a formula. I'm only guessing that this might help, or that the situation even applies.
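    A minimal calc script sketch of that arrangement (member names hypothetical) - run directly against CUBE_SRC_Current, with the FIX scoped to the current slice so the partition members are never touched:
    /* runs on CUBE_SRC_Current, not on the facade */
    FIX ("Actual", "CurrentYear")
    CALC DIM ("Accounts");
    ENDFIX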

  • Data Recovery from Partitioned and formatted Bit Locker Encrypted Drive

    Recently, because of some issues installing Windows 7 from a Windows 8-installed OS (it said the disk is dynamic and Windows cannot be installed on it), and after struggling hard with no other solution, I partitioned and formatted my whole drive. So all data are gone, including the drive which was encrypted by BitLocker.
    For recovery I used many programs such as Ontrack EasyRecovery, GetDataBack, and Recover My Files Professional Edition, but I still could not recover my data from that drive. Then I found a suggestion to use CMD to decrypt my data first:
    http://technet.microsoft.com/en-us/library/ee523219(WS.10).aspx
    It showed that it successfully decrypted my data; at that moment my drives were in RAW format, excluding the one Windows is installed on. Then in CMD I ran chkdsk, which also showed no problem found. But the problem is I still could not recover my data. I then formatted drive D and tried again to recover data using the above software after decryption, still with no result.
    Now I need assistance with how I can recover my encrypted drive, as it was partitioned and also formatted, but decrypted too, as I have its recovery key. Thanks

    Hi,
    I am afraid that the data cannot be recovered if the drive has been formatted, even using the BitLocker Repair Tool.
    You'd better contact your local data recovery center to try to get the data back.
    Tracy Cai
    TechNet Community Support

  • Pivot and dynamic SQL

    Hi Team,
    I need to write a SQL to cater the requirements. Below is my requirements:
    pagename fieldname fieldvalue account_number consumerID
    AFAccountUpdate ArrangementsBroken dfsdff 1234 1234
    AFAccountUpdate ArrangementsBroken1 dfsdff 1234 1234
    AFAccountUpdate ArrangementsBroken2 dfsdff 1234 1234
    AFAccountUpdate ArrangementsBroken2 dfsdff 12345 12345
    AFAccountUpdate ArrangementsBroken1 addf 12345 12345
    Create table test_pivot_dynamic
    (
    pagename varchar(200),
    fieldname varchar(200),
    fieldvalue varchar(500),
    N9_Router_Account_Number bigint,
    TC_Debt_Item_Reference bigint
    )
    --Input
    insert into test_pivot_dynamic Values('AFAccountUpdate','ArrangementsBroken','addf',1234,1234)
    insert into test_pivot_dynamic Values('AFAccountUpdate','ArrangementsBroken1','dfsdff',1234,1234)
    insert into test_pivot_dynamic Values('AFAccountUpdate','ArrangementsBroken2','fder',1234,1234)
    insert into test_pivot_dynamic Values('AFAccountUpdate','ArrangementsBroken2','dfdfs',12345,12345)
    insert into test_pivot_dynamic Values('AFAccountUpdate','ArrangementsBroken1','dfdwe',12345,12345)
    insert into test_pivot_dynamic Values('AFAccountUpdate1','Arrangements','addf',1234,1234)
    insert into test_pivot_dynamic Values('AFAccountUpdate1','Test1','dfsdff',1234,1234)
    --Expected output:
    Select 1234,1234,'AFAccountUpdate','ArrangementsBroken','addf','ArrangementsBroken1','dfsdff','ArrangementsBroken2','fder','ArrangementsBroken2','fder'
    Select 12345,12345,'AFAccountUpdate','ArrangementsBroken','addf','ArrangementsBroken1','dfdwe','ArrangementsBroken2','dfdfs'
    Select 1234,1234,'AFAccountUpdate1','Arrangements','addf','Test1','dfsdff'
    So basically we have to pivot, using dynamic SQL, and insert the expected output into a common table which will have all the required fields.
    Thanks,Ram.

    This should give you what you're looking for
    SELECT N9_Router_Account_Number,TC_Debt_Item_Reference,PageName,
    MAX(CASE WHEN SEQ = 1 THEN fieldname END) AS fieldname1,
    MAX(CASE WHEN SEQ = 1 THEN fieldvalue END) AS fieldvalue1,
    MAX(CASE WHEN SEQ = 2 THEN fieldname END) AS fieldname2,
    MAX(CASE WHEN SEQ = 2 THEN fieldvalue END) AS fieldvalue2,
    MAX(CASE WHEN SEQ = 3 THEN fieldname END) AS fieldname3,
    MAX(CASE WHEN SEQ = 3 THEN fieldvalue END) AS fieldvalue3,
    MAX(CASE WHEN SEQ = 4 THEN fieldname END) AS fieldname4,
    MAX(CASE WHEN SEQ = 4 THEN fieldvalue END) AS fieldvalue4
    FROM
    (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY N9_Router_Account_Number, TC_Debt_Item_Reference, PageName ORDER BY fieldname) AS SEQ
    FROM test_pivot_dynamic
    ) t
    GROUP BY N9_Router_Account_Number,TC_Debt_Item_Reference,PageName
    To make it dynamic see
     http://www.beyondrelational.com/modules/2/blogs/70/posts/10791/dynamic-crosstab-with-multiple-pivot-columns.aspx
    Visakh
    http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • User View is not reflecting the source data - Transparent Partition

    We have transparent partition cubes. We recently added new fiscal year details to the cube (the user view as well as the source data cube). We loaded the data into the source data cube. When we tried to retrieve data from the user view, it showed 0's, but the data is available in the source data cube. Could anyone please provide information on what the issue might be?
    Thanks!

    Hi -
    If you haven't added the new member to the partition area, then Madhvaneni's advice is the one you should follow, because if the member isn't in the area definition, the target can't read it from the source.
    If you have already added the new member to the partition area and the data still won't show up, it is sometimes worth re-saving the partition and seeing the outcome.
    -Will

  • Where did the Partitions and SubPartitions go?

    I created a table partitioned by range (Transaction_Date, Retention_Period) with a hash (Record_Id) subpartition template (for 32 subpartitions).
    Then I add more partitions, and while the loop is running I can get counts of partitions and subpartitions. The job finished, and my log table shows about 1800 partitions added; there should be 32 subpartitions for each partition. However, USER_TAB_PARTITIONS shows zero records for the table, and USER_TAB_SUBPARTITIONS also shows zero records. After a few minutes the partitions show up, but no subpartitions. The indexes on the table have also disappeared (one local and one global).
    Any explanation for this behaviour?
    Working on Exadata 11.2.0.3
    Querying
    USER_TABLES
    USER_TAB_PARTITIONS
    USER_TAB_SUBPARTITIONS
    USER_INDEXES

    >
    Step 1. Create Table xyz (c1 date, c2 integer, c3 integer, etc)
    partition by range (c1,c2)
    subpartition template (s01, s02... s32)
    create index i1 on xyz (c1,c2,c3) local;
    Then, since I want to create about 1800 partitions, I have a procedure with a loop around ALTER TABLE ADD PARTITION .. until all the partitions are created. This is the "job"; while it is running I query USER_TAB_PARTITIONS and USER_TAB_SUBPARTITIONS to see how things are progressing. And yes, ALTER TABLE itself gives no progress to verify.
    So all the partitions get created, with no errors from the procedure along the way. I would therefore expect that at the end I should see all the new partitions for the table. Instead I get "no records" from USER_TAB_PARTITIONS and USER_TAB_SUBPARTITIONS.
    I am also aware that "ALTER TABLE ADD PARTITION .." cannot make indexes go away. However, if the query on USER_INDEXES returns nothing, what happened to the index created before the partitions were added?
    I am not using DBMS_REDEFINITION. The only procedure is to add partitions one at a time for each date for 3 years. If you have a better way than a procedure please advise accordingly.
    >
    In order to help you, the first step is to understand what problem you are dealing with. Then comes trying to determine what options are available for addressing the problem. There are too many times, and yours may or may not be another one, where people seem to have settled on a solution before they have really identified the problem.
    Anytime someone mentions the use of dynamic SQL it raises a red flag. And when that use is for DDL, rather than DMl, it raises a REALLY BIG red flag.
    Schema objects need to be managed properly and the DDL that creates them needs to be properly written and kept in some sort of version control.
    Scripts and procedures that use dynamic SQL are more properly used to create DDL, not to execute it. That is, rather than use a procedure to dynamically create or alter a table you would use the procedure to dynamically create a DDL script that would create or alter the table.
    Let's assume that you know for certain that your table really needs to have 1800 partitions, be subpartitioned the way you say and have partition and subpartitions names that you assign. Well, that would be a pain to hand-write 1800 partition definitions.
    So you would create a procedure that would produce a CREATE TABLE script that had the proper clauses and syntax to specify those 1800 partitions. Your 'loop' would not EXECUTE an ALTER TABLE for each partition but would create the partition specification and modify the partition boundaries for each iteration through the loop. Sort of like
    for i in 1 .. 365 loop
        -- append the partition spec for startDate + i to the DDL script
    end loop;
    The number of iterations would be a parameter, and you would start with 2 or 3. Always test with the smallest code that will produce the correct results. If the code works for 3 days it will work for any larger reasonable number.
    Then you would save that script in your version control system and run it to create the table. There would be nothing to monitor since there is just one script and when it is done it is done.
    That would be a proper use of dynamic sql: to produce DDL, not to execute it.
    Back to your issue. If I were your manager then based on what you posted I would expect you to already have
    1. a requirements document that stated the problem (e.g. performance, data management) that was being addressed
    2. test results that showed that your proposed solution (a table partitioned the way you posted) solves the problem
    The requirements doc would have detail about what the performance/management issues are and what impact they are having
    You also need to document what the possible solutions are, the relative merits of each solution and the factors you considered when ranking the solutions. That is, why is your particular partitioning scheme the best solution for the problem.
    You should have test results that show the execution plans and performance you achieved by using a test version of your proposed table and indexes.
    Until you have 'proven' that your solution will work as you expect I wouldn't recommend implementing the full-blown version of it.
    1. Create a table MANUALLY that has 2 or three days worth of partitions.
    2. Load those partitions with a representative amount of data
    3. Execute test queries to query data from one of those partitions
    4. Execute the same test queries against your current table
    5. Capture the execution plans (the actual ones) for those queries. Verify that you are getting the performance improvements that you expected.
    Once ALL of that prep work is done and you have concluded that your table/index design is correct then go back to work on writing a script/procedure that will produce (not execute) DDL to produce the main table and partitioning you designed.
    Just an aside on what you posted: the indexes should be created AFTER the table and its partitions are created. If you create your local index first, as your post suggests, you force Oracle to revamp it 1800 times as each partition is added. Just create the index after the table.
    p.s. The number of posts anyone has is irrelevant. The only thing that matters is whether the advice or suggestions they provide are helpful, and that helpfulness is limited to, and based on, ONLY the information a poster provides. For example, your proposed partitioning scheme might be perfectly appropriate for your use case, or it could be totally inappropriate. We have no way of knowing without knowing WHY you chose that scheme.
    But I haven't seen one like it, which makes me suspicious that you really need to get that complicated.
