Dynamic Partitions + Aggregation Designs. Est rows. How to handle/does it matter?

Hi all.
I've created a 'template' partition with an aggregation design in my measure group (the slicer criterion is WHERE OrderDate/100 = null, so the partition has 0 rows).
I use the Create XMLA from this partition (including specifying the aggregation design) to dynamically create a new partition with SSIS, based on the yyyymm of the incoming facts. This all works fine.
I have some reservations about the metadata that I'm passing as part of the create-partition XMLA. Do I need to specify accurate EstimatedRows? Is it actually used by the engine, similar to distribution statistics on the DB engine?
And what about the estimated rows in the aggregation design? I cross-joined FactInternetSales in AdventureWorks a few times to get 200 million rows to test my dynamic partition creation ETL. But when I look at the estimated rows in the aggregation design, it shows 156 (this would have
been the original amount from when I built it using the wizard). Does this matter, or do I need to maintain the full rowcount in the aggregation design specification as well? And if so, how do I do it?
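For reference, both numbers are plain elements in the partition DDL; a trimmed sketch of the create-partition XMLA (all IDs and values here are invented, and the Source binding is elided):

```xml
<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ParentObject>
    <DatabaseID>MyOlapDb</DatabaseID>
    <CubeID>MyCube</CubeID>
    <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
  </ParentObject>
  <ObjectDefinition>
    <Partition>
      <ID>Fact 201401</ID>
      <Name>Fact 201401</Name>
      <!-- Source (QueryBinding with the yyyymm slice) elided -->
      <EstimatedRows>5000000</EstimatedRows>
      <AggregationDesignID>MyAggDesign</AggregationDesignID>
    </Partition>
  </ObjectDefinition>
</Create>
```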
Jakub @ Adelaide, Australia

bump.
I'm working on something similar again and I never got an answer to this question. Are the estimated row counts in the partition and aggregation XMLA of any use?
I've also noticed that the aggregation design against my dynamically partitioned measure group shows an estimated performance gain of 0%. Again, does this matter, or is it for informational purposes only?
Jakub @ Adelaide, Australia

Similar Messages

  • Backing up partitions to an external HD, how many images does it make?

I have several partitions on my MBP, of which I would like to back up only 2. I know I can control this in the TM setup. However, my question is: would TM then make one or two sparse images? And if only one, how could I get it to make two images?
Any insights or tips about this?
    <Edited by Moderator>

Yes you can. But I would suggest doing each one at a time (just for performance: if you try to write from two computers to the same drive at the same time, access time will be reduced to less than half).
    Regards,
    Elrick

  • SSAS 2008R2: Dynamic partitioning via SSIS - is this logic correct?

    Hi all,
I'm setting up dynamic partitioning on a large financial cube and implementing a 36-month sliding window for the data. I just want to make sure that my logic is correct.
Basically: is doing a Process Update of all the dims, then a Process Default of my facts (after I've run the XMLA to add/remove partitions), enough to have a fully processed (and performant/aggregated) and accurate cube?
    Assume I have a fact that has a 'reporting month', 'location key' and then numerous measures and dim keys. It holds the revenue for that location for the reporting month.
The reporting month can never be backdated; subsequent runs can only overwrite the current reporting month or add the next month.
Assume the data warehouse has been loaded successfully. The warehouse holds a 72-month rolling history.
    Now, to the dynamic partitioning. The fact is partitioned by reporting month and has aggregation designs.
My SSIS package initially does a Process Update on all the dimensions. My understanding is that this 'flags' which existing measure group partitions need to be re-indexed.
    Then in my data flow:
I run a simple query over my fact (select 'my partition ' + str(billmonth, 6) AS PartitionName, count(*) AS EstCount from myFact where billmonth > 36 months ago group by billmonth order by PartitionName) to get a list of all the partitions that exist
in the data warehouse and should therefore be in the cube.
I do a full outer merge on the partition name against the equivalent list from my cube, which I get from a script component source with the following code:
AMO.Server amoServer;
AMO.MeasureGroup amoMeasureGroup;

public override void PreExecute()
{
    base.PreExecute();
    amoServer = new AMO.Server();
    amoServer.Connect(Connections.Cube.ConnectionString);
    amoMeasureGroup = amoServer.Databases
        .FindByName(amoServer.ConnectionInfo.Catalog.ToString())
        .Cubes.FindByName(Variables.CubeName.ToString())
        .MeasureGroups.FindByName(Variables.MeasureGroupName.ToString());
    amoServer.CaptureXml = true;
}

public override void PostExecute()
{
    base.PostExecute();
    amoServer.Dispose();
}

public override void CreateNewOutputRows()
{
    try
    {
        // One output row per partition that currently exists in the cube
        foreach (AMO.Partition OLAPPartition in amoMeasureGroup.Partitions)
        {
            Output0Buffer.AddRow();
            Output0Buffer.PartitionName = OLAPPartition.Name;
        }
    }
    catch (Exception e)
    {
        bool cancel;
        this.ComponentMetaData.FireError(-1, this.ComponentMetaData.Name,
            String.Format("The measure group {0} could not be found. {1}",
                Variables.MeasureGroupName.ToString(), e), "", 0, out cancel);
        throw;
    }
}
(Not a C# coder; the above was stolen and butchered from elsewhere, but it seems to work.)
I use a conditional split to separate the rows where datawarehouse.PartitionName is null (generate XMLA to delete from the cube) from those where cube.PartitionName is null (generate XMLA to add to the cube). I don't do anything with partitions that exist in both the cube and the
data warehouse.
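The conditional-split logic above boils down to two set differences; a minimal sketch (in Python for brevity; the partition names are invented):

```python
def reconcile(dwh_partitions, cube_partitions):
    """Split partition names into (to_create, to_drop).

    to_create: in the warehouse but not yet in the cube
    to_drop:   in the cube but no longer in the warehouse
    Names present on both sides are left untouched.
    """
    dwh, cube = set(dwh_partitions), set(cube_partitions)
    to_create = sorted(dwh - cube)  # becomes Create <Partition> XMLA
    to_drop = sorted(cube - dwh)    # becomes Delete XMLA
    return to_create, to_drop


dwh = ["my partition 201401", "my partition 201402", "my partition 201403"]
cube = ["my partition 201312", "my partition 201401", "my partition 201402"]
print(reconcile(dwh, cube))
# (['my partition 201403'], ['my partition 201312'])
```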
I then perform a Process Default of the measure group.
I'm assuming this will do a Process Full of the new unprocessed partitions, and a Process Data + Process Index of any partitions that were flagged by the dimensions' Process Update. Is this correct? Or do I need to do any other explicit
processing of my measure groups to make sure my facts are 100% accurate?
    Thanks.
    Jakub @ Adelaide, Australia

Cheers, I'll switch it to use GetByName instead.
The reprocessing of the current month includes steps in the SSIS package flow that explicitly remove the data from the relational data warehouse (DELETE FROM) and from the cube (an XMLA delete against the partition).
Yes, I do have other measure groups in the cube.
    I have five measure groups in total. Three are dynamically partitioned while the other two have a single partition.
What will happen to new data in the two single-partition measure groups? I did some further reading, and now my understanding is that a Process Default might not process the aggregates if there are no dimension changes but new fact data arrives.
I'm now thinking of making my flow:
1. Execute the dynamic partitioning XMLA.
2. Process Update the dimensions with affected objects included (this will reprocess existing dynamic partitions that are modified by any dim changes).
3. Process Default the 3 dynamically partitioned measure groups (this will process any newly added dynamic partitions).
4. Process Full the 2 single-partition measure groups. This step might redo some of the work done in step 2, but these measure groups are only a few million rows in one case and a few hundred in the other, with minimal growth expected. And what you just said about changing data made me realise that the sliding window in these last two is implemented via the source script, so I need to do a Process Full here anyway.
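Steps 3 and 4 above would translate into a processing batch along these lines; this is a sketch only, with invented object IDs:

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <CubeID>MyCube</CubeID>
      <MeasureGroupID>PartitionedMeasureGroup1</MeasureGroupID>
    </Object>
    <Type>ProcessDefault</Type>
  </Process>
  <Process>
    <Object>
      <DatabaseID>MyOlapDb</DatabaseID>
      <CubeID>MyCube</CubeID>
      <MeasureGroupID>SinglePartitionMeasureGroup1</MeasureGroupID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
</Batch>
```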
    Jakub @ Adelaide, Australia

  • How to handle multiple datasources in a web application?

I have a J2EE web application with servlets and JavaServer Pages. Besides this, I have an in-house developed API for certain services, built using Hibernate and Spring with POJOs and some EJBs.
There are 8 databases which will be used by the web application. I have heard that multiple datasources with Spring are hard to design around. I have no choice but to use Spring and Hibernate, as the APIs are using them.
Does anyone have a good design specification for how to handle multiple datasources? The datasource (database) will be chosen by the user in the web application.

Let me get this straight: you have a web application that uses the Spring Framework and Hibernate to access the database, and you want the user to be able to select the database that he wants to access.
Hopefully you are using Spring's Hibernate DAO support. I know you can have more than one Spring application context, so you can try loading a separate application context for each database. Each application context would have its own configuration files with the connection parameters for its datasource. You could still use JNDI entries in the web.xml for each datasource.
Then you would need a service locator, so that when a user selects a datasource he gets the application context for that datasource, which he then uses for the rest of his session.
I think it is doable. It means a long load time, and you'll need to keep the application contexts as small as possible to conserve resources.

  • How to set "Avoid Aggregation on Duplication Rows" Checked by default?

When users create new rows, how can we set 'Avoid Aggregation on Duplicate Rows' to be checked by default? Is there any parameter available in the config files?
    thanks
    Raj

    Hi,
    To be fair, it's a pretty odd question in the first place.
    If you have an enhancement request, please feel free to submit it on the "ideas place" for webI: https://cw.sdn.sap.com/cw/community/ideas/businessanalytics/sbowebi
    regards,
    H

  • How to handle the dynamic rows in pdf table

    Dear All,
Earlier I posted a thread regarding getting PDF table data:
[facing problem while getting interactive form table data;
This is working fine; I used bind_table in WDDOINIT, but there I am fixing the row count before calling bind_table.
For example, I initially set the row count to 3; now I want to increase the number of rows in the PDF table.
I know we can use FormCalc to increase the rows via a button in the PDF layout.
This also works, but the data is not picked up for the newly added rows; I suspect the problem is that the table node in the context is not bound for these new rows.
I even tried using Web Dynpro native button controls, but it still doesn't work.
Can anyone explain what exactly bind_table is doing, and how to handle this in FormCalc,
since my table is a PDF table?
    Thanks,
    Mahesh.Gattu

Hi Thomas,
Thanks for your confirmation.
I have checked the parameters of the submit button; we have only the WDEVENT parameters:
CL_WD_CUSTOM_EVENT
          PARAMETERS - hashed table with 2 columns
          ID -> IF_TDS (Interactive Form element name)
          CONTEXT_ELEMENT ->
These are the same for submit button 1 and submit button 2.
          NAME - name of the button event, i.e. ON_SUBMIT (this is also the same for both buttons).
So I think it is not possible to work with multiple buttons by assigning them to multiple tables on the form.
The other option is to place the buttons outside of the form, in the surrounding WDA area.
That way there is no problem handling the events.
In WDDOINIT, if I use bind_table with 5 rows, the form table is populated with 5 rows. But when I put a button outside the form and use bind_table with an incremented row count, the PDF table rows are not added. If I enter something in the PDF table and then click the Add Row button, the rows are added; if I don't do any action on the PDF table first, clicking Add Row does not update the PDF.
Is there a known issue where the Add button does not update the rows unless the cursor has been placed in the table?
If I use a button on the WD view (i.e. outside the form) with bind_table, I can add rows. But in the case of removing rows, how do I do it?
With a normal table we can use Remove_Element( ), but how can I know the selected row of the PDF table? Please help me with this as well.
    Regards,
    Mahesh.Gattu
    Edited by: Maheshkumar gattu on Jan 7, 2009 3:57 PM
    Edited by: Maheshkumar gattu on Jan 7, 2009 4:03 PM
    Edited by: Maheshkumar gattu on Jan 7, 2009 5:21 PM

  • How to display dynamic values in poplist at row level in advanced table

I want to display dynamic values in a poplist at row level in an advanced table, based on a row value. With an LOV I can achieve it; is there any way to achieve this with a poplist?
    Thanks
    Bbau

    Babu,
You have been long enough in the forum and still come out with these one-liners. The problem statement is not clear.
--Shiv

  • Dynamic partitioning in 10g -

    Hi,
I am on 10g and need to implement dynamic partitioning.
In a table, partitioning has to be driven dynamically by the values of one column:
id  name   value
--  -----  -----
1   name1  val1
2   name1  val2
3   name2  val3
4   name2  val4
5   name2  val5
From the above table, a separate partition has to be created for each distinct name (i.e. for name1, name2).
If a name3 is added, a new partition has to be created dynamically.
If a column value is deleted (e.g. if all name1 rows are deleted), the relevant partition should also be dropped.
Please suggest how to achieve this.
    Thanks

Dynamic partitioning isn't available in Oracle 10g. It exists starting with 11g, under the name 'Interval Partitioning'.
If a name3 is added a new partition has to be created dynamically.
On 10g, the only option you have is to create partitions ahead of time. Oracle won't create them for you dynamically.
If the column value is deleted (if name1 is deleted) the relevant partition should also be deleted
This too would have to be done manually. If you delete a particular row, or even an entire set, from the partition, the partition still exists. You need to drop the partition explicitly (it can also be truncated), and this is the same in 10g and 11g.
Having said that, partitioning is in general a DWH feature: you truncate an entire partition and load it back again. Updates and deletes of a small set or a single row are not operations that normally go with partitioning.
    Thanks,
    Ishan
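For illustration, 11g interval partitioning looks roughly like the following. Note that it only applies to NUMBER or DATE range keys, so a VARCHAR column like name above would still need manually managed list partitions; the table and column names here are invented:

```sql
CREATE TABLE sales (
  id        NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION p_initial VALUES LESS THAN (DATE '2010-01-01') );
-- Oracle creates a new monthly partition automatically when a row
-- beyond the existing range is inserted.
```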

  • Dynamic partitioned Hard Driver cannot be accessed when connected by USB 3.0 Enclosure

    Hi Guys,
    Here is my issue.
There is a 2TB SATA hard drive, formatted to NTFS with 64KB blocks. The partition type is dynamic.
I removed the drive from a system and brought it to another location.
This time it is connected via a USB 3.0 enclosure, which I know works; it has been tried with different hard drives with no issue.
Windows is 8.1. It sees the disk but cannot reactivate it. It shows the Reactivate option, but when you select it you see this message:
Virtual Disk Manager
This operation is not allowed on the invalid disk pack.
OK
I understand that USB dynamic disks might not be supported, but the scenario above is valid.
My question is: how do I bring the partition online using a USB enclosure?
Is there a registry workaround, or another one which involves no data loss?
It is a great pity not to be able to use USB 3.0 when you have a dynamic partition; I do not see any physical limitation preventing it.
    Thanks for your help!

Hi,
As far as I know, whether the USB enclosure can identify the dynamic hard drive depends on the USB driver and the USB specification.
But to make your computer recognize this hard drive, there is a workaround to change it to the basic disk type:
    http://windowsforum.com/threads/dynamic-disk-invalid.3906/
    This response contains a reference to a third party World Wide Web site. Microsoft is providing this information as a convenience to you. Microsoft does not control these sites and has not tested any software or information found on
    these sites; therefore, Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. There are inherent dangers in the use of any software found on the Internet, and Microsoft cautions
    you to make sure that you completely understand the risk before retrieving any software from the Internet.
    Hope these could be helpful.
    Kate Li
    TechNet Community Support

  • 3.5 Query Designer has 'row' in the footer of results but 7.0 has 'page'

The 3.5 Query Designer has 'row' in the footer of results, but 7.0 has 'page'. Is there a way to change 7.0 to show 'row' instead of 'page'? We would prefer to see how many rows of data we have. I didn't know if this is a parameter that can be changed.
    Thanks,
    Diane

I do not think there is a way to display Row instead of Page.
Also, in 7.0 it is based on cells.

  • How to handle dynamic screens in bdc

    HI SIR,
I am working on a BDC for CA02. The problem is that if operation 10 does not contain any items, one screen is shown; but if operation 10 does contain items
(its item counter increases automatically), the recording reaches that screen through some other screen. So please help me handle these dynamic screens in BDC.

Hi sir,
I am now working on a BDC upload with tcode CA02. On the second screen there are rows, as in a table control:
opt
10    x
20
30    x
As above, if one column of a row contains 'x' (i.e. the checkbox is selected, meaning the operation already contains sub-items), then filling a sub-item displays one screen (e.g. 100); otherwise it displays screen 200. Please tell me how to know from the screen whether the checkbox is selected or not.
Thanking you

  • Optimization Level never rises during aggregation design

I've recently added a couple of new dimensions to an old SSAS database. Since I added new dimensions, I decided I should re-design the aggregations.
I use the Aggregation Design Wizard, and it acts pretty normal for most of the measure groups.
But then I get to the only measure group that connects to my two new dimensions. This happens to be a very large measure group, and it uses 19 different cube dimensions.
When I run the Aggregation Design Wizard on this group, I choose the "until I click Stop" option and let it go, and it starts designing aggregations. The number of aggregations designed keeps going up, and the storage space allocated
keeps going up, but the optimization level stays at 0% the whole time. After a few minutes it gives up at about 200 aggregations and 6 GB of space used, and still 0% optimization.
    Is there any possible scenario in which this might be expected and normal, or should I be worried?   I've never seen this happen before.
    -Tab Alleman

    Hi Tab,
If you select the 'until I click Stop' option and watch the design grow until the estimated size is ridiculously large (maybe over a couple of GB), you can get a feeling for how many small aggregations can be built; you can then stop it, reset the aggregations,
and restart using either the 'Performance Gain' or 'Storage Reaches' option set to an appropriate level.
I would suggest you refer to the following articles regarding best practices and effective aggregation design in SSAS:
    Designing Effective Aggregations in AS2005:
    http://cwebbbi.wordpress.com/2006/10/23/designing-effective-aggregations-in-as2005/
    Aggregation Design Best Practices:
    http://technet.microsoft.com/en-us/library/cc966399.aspx#EBAA
If you have any feedback on our support, please click here.
    Regards,
    Elvis Long
    TechNet Community Support

  • Dynamic string in designer workflow.

    Hi,
I have created form library A and one SharePoint list B with email id, name and company name details.
I want to develop a workflow to send an email, with a link, to the users in list B.
Before sending the email, the workflow should verify whether there is an item in Library A with the name (company name + user name from the List B values).
If there is such an item, the workflow should send the existing item's link to that user so they can update it; if not, it should send a link to create a new report in Library A. The workflow should send emails every week until the
item's status changes to "Completed".
My question is: how do I create dynamic links in Designer? Is this accomplishable using SharePoint Designer? The client does not want custom-coded solutions.
Any help would be appreciated.
    Thank you.
    AA.

    Hi
Firstly, a few things to consider:
- SharePoint workflows are assigned to a single item, based on the create or modified events of list items.
- Iterating through a list/library is not supported OOTB.
- Even if we manage to iterate through list items, it would cause issues when there are too many items in the list.
For creating links you can use workflow variables in SharePoint Designer; we normally use the current item URL from the workflow context:
http://office.microsoft.com/en-us/sharepoint-designer-help/send-e-mail-in-a-workflow-HA010239042.aspx
Some references for looping, to start with:
    http://sharepointgypsy.blogspot.com/2011/11/create-for-each-loop-for-workflows.html
    http://brianscodingexamples.wordpress.com/2013/05/09/create-while-loop-within-workflow-in-sharepoint-designer-2010/
    http://social.msdn.microsoft.com/Forums/sharepoint/en-US/a9c6ab96-3b7f-428e-be5d-c2323e95cfe4/loop-through-sharepoint-list-having-more-than-100-items-using-ootb-sharepoint-desinger-2010-workflow
    Hope this helps!
    Ram - SharePoint Architect
    Blog - SharePointDeveloper.in
    Please vote or mark your question answered, if my reply helps you

  • Dynamically creating table and inserting rows and columns using JSP

    Hi,
I'm using MySQL and JSP to create a web interface for my forms/tables. I want to create the table dynamically, depending on the data in the table and for each particular record; these values should be loaded into the form from the database as soon as the form loads.
I also want a button which adds new rows dynamically, and the same for columns.
How do I calculate the values across the rows on the forms?
I'm new to JSP. Please point me in the right direction... any tutorials or code would be helpful.
Any help is appreciated.
    Thanks
    Ayesha

Write the code in this sequence:
1. Use JDBC to run select count(*) from the table, to get the row count.
2. Allocate arrays, e.g. String[] doc_no = new String[count];
3. Use JDBC to run select * from the table with your condition, and load the arrays:
<%
    // Load each column into its array (rs4 is the ResultSet from step 3)
    int i = 0;
    while (rs4.next()) {
        doc_no[i]   = rs4.getString(2);
        date1[i]    = rs4.getString(3);
        doc_type[i] = rs4.getString(4);
        location[i] = rs4.getString(5);
        cheque[i]   = rs4.getString(6);
        rate[i]     = rs4.getInt(7);
        deb_qty[i]  = rs4.getInt(8);
        cre_qty[i]  = rs4.getInt(9);
        deb_amt[i]  = rs4.getInt(10);
        cre_amt[i]  = rs4.getInt(11);
        i++;
    }
    rs4.close();
    // Emit one HTML table row per record read
    for (int j = 0; j < i; j++) {
%>
<tr>
<td width="15%"><font size="1"><%=doc_no[j] %></font></td>
<td width="10%"><font size="1"><%=date1[j] %></font></td>
<td width="12%"><font size="1"><%=doc_type[j] %></font></td>
<td width="9%"><font size="1"><%=location[j] %></font></td>
<td width="9%"><div align="left"><font size="1"><%=cheque[j] %></font></div></td>
<td width="8%"><div align="right"><font size="1"><%=deb_qty[j] %></font></div></td>
<td width="8%"><div align="right"><font size="1"><%=cre_qty[j] %></font></div></td>
<td width="9%"><div align="right"><font size="1"><%=deb_amt[j] %></font></div></td>
<td width="10%"><div align="right"><font size="1"><%=cre_amt[j] %></font></div></td>
</tr>
<% } %>
Write back if there is any specific problem.
bye,
Samir

  • I have bought and been using the 'Adobe Creative Suite 6 Design Standard'. How do i move this from one laptop, to another laptop?

    I have bought and been using the 'Adobe Creative Suite 6 Design Standard'.
How do I move this from one laptop to another?
I require this for uni, and am struggling to move it across!
    If you can help that would be great

    Hi 7717arrow,
Please use the link below to download CS6 Design Standard on the new machine:
http://helpx.adobe.com/x-productkb/policy-pricing/cs6-product-downloads.html
Use the same serial number to activate the product.
    Thanks

Maybe you are looking for

  • JMS problem with Sun Application Server 8.2

    Hi! I've just started trying JMS and found a problem. I set a connection factory called "QueueConnectionFactory" in the Sun Application Server Admin Consol. After this I test this code: import javax.jms.*; import javax.naming.*; public class Sun_JNDI

  • RoboHelp 8 numbers not appearing as in WYSIWYG

    Since updating to RoboHelp 8.02, the fonts of the numbers in numbered steps are not appearing correctly in the generated CHM. I am attaching 2 images - 1 is of how the numbering appears in the WYSIWYG and 1 of how it appears in the generated CHM. The

  • Passing Project Definition number to a Workflow through milestone

    Hi Experts, I have created a Project in CJ20N and i am triggering a workflow using a milestone when the activity is realesed. The workflow is used to send email. In the workflow in start conditions i have given the BOR for milestone. The workflow is

  • Download previous purchase

    I am looking at iTunes Store now. Song I bought yesterday on another Mac says "Purchased" where it used to say "$1.29". Which is correct. So how do I get this song on to this Mac? Does not show up in "Purchased" tab. Do I need to be on the same netwo

  • Even though Nikon D600 is supported, I cannot download NEF

    I read all of the discussions about the Nikon D600 RAW/NEF download problems (all dated around Oct 2012). Today, the Apple information says that D600 RAW/NEF images can be downloaded into Aperture as long as you have version 3.4 or later. I just down