"Supports rollup to higher level of aggregation" property

Hi Gurus,
I am confused about the significance of the check box "Supports rollup to higher level of aggregation". I see it always checked, and from a few notes I could understand that "it is selected so that data stored at this level can be aggregated to produce the total for its parent level without double counting or leaving anything out". If I uncheck it, I get an error message and the RPD becomes inconsistent.
Please help me with an example:
1. In which scenario do we uncheck this option?
2. How does the BI Server execute the hierarchy with this option unchecked?
Thanks,
Sreekanth Jala
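
As a rough illustration of what "rollup to a higher level" means here (throwaway data invented for this note, not from any RPD): if the leaf level stores one row per month, the BI Server can produce the parent (year) total by grouping and summing those rows, counting each leaf exactly once.

    with month_sales as (
      select 2009 yr, 'Jan' mth, 100 revenue from dual
      union all select 2009, 'Feb', 150 from dual
      union all select 2010, 'Jan', 120 from dual
    )
    -- each month (leaf) row contributes exactly once to its year (parent) total:
    -- no double counting, nothing left out
    select yr, sum(revenue) as revenue
      from month_sales
     group by yr;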

Similar Messages

  • Can you post Notes (Display Note) at a higher level of aggregation than CVC

    I am only able to post notes with Display Note at the CVC level. With ALL PRODUCTS / ALL CUSTOMERS I receive the error message "No Notes can be processed in the current selection". With ONE PRODUCT / ALL CUSTOMERS or ALL PRODUCTS / ONE CUSTOMER the note is display-only, no input (greyed out). I can post notes at the CVC level. Is this the only level at which you can post notes?

    HI Kevin ,
    Yes, there are restrictions on the levels at which you can post a note and the levels at which you can edit notes, and hence those will be greyed out at certain levels.
    For more details please refer to my earlier post [No notes can be processed in the current selection - Error Msg|Re: No notes can be processed in the current selection - Error Msg.]
    Regards,
    Digambar

  • Errors in the high-level relational engine. The data source view does not contain a definition for the table or view. The Source property may not have been set.

    Hi All,
    I have a cube in which I'm using the TIME DIM that I created in the warehouse. But now I want a new measure in the cube, an average over time, and when I tried to create the new measure I got a message that no time dimension was defined, so I created a new time dimension in SSAS using the wizard. But when I tried to process the new time dimension I got the following error message:
    "Errors in the high-level relational engine. The data source view does not contain a definition for "SSASTIMEDIM" the table or view. The Source property may not have been set."
    Can anyone please tell me why I cannot create a new measure, average over time, using my time dimension? Also, what am I doing wrong with SSASTIMEDIM that I'm getting this error?
    Thanks

    Hi PMunshi,
    According to your description, you get the above error when processing the time dimension. Right?
    In this scenario, since you have updated the DSV, there should be no problem with the table's existence. One possibility is that the table has been specified for tracking in the notifications for proactive caching, but is no longer available for some reason. Please change the Proactive Caching setting to "MOLAP".
    Reference:
    How To Implement Proactive Caching in SQL Server Analysis Services SSAS
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou
    TechNet Community Support

  • How to perform high-level planning without doing full costctr-level rollup

    I am wondering what my options are for this effort - we want to start our 2009 budget season by first preparing a high-level 2009 plan. To do the high-level plan we'd like to avoid building and interfacing workbooks at the cost center level; instead, to get started, we want to look at groups of cost centers that represent the different lines of business of the Bank (retail, credit, etc).
    Here's my question: is it possible to build planning workbooks not at the cost center level, but at a cost center rollup level? That is, we'd like to avoid having 10 workbooks at the cost center level that make up the rollup "Mortgage Lending"; instead we want one workbook for Mortgage Lending that includes data for all 10 of those CCs.
    Please let me know if you have any ideas as to how to do so, thanks, W

  • Spoke to High Level Support whine

    Basically, he said it will be either a software update or a mass repair for a component.
    We shall see.
    Jake

    No,
    Basically, I spoke to customer relations because I was angry again, since I got my replacement unit with the whine. They transferred me to tech support, but she said he was high level (top tier)... This guy said it's really a shame and that they are working on it; he stated he was just down the hall and there are like 20 or 30 of these cases open and engineering is working on it.
    He really couldn't say what the deal was; he just told me that if it's a software update, I'll be emailed by him before it goes public and he'll keep me posted. Otherwise, if it's a mass recall, then I'll know that too.

  • SQL Query to Roll up amounts starting from a lower to higher level

    with data as (select 'C' Child, 'P' as parent, 11 amount from dual
      union all select 'C1', 'C', -2 from dual
      union all select 'C2', 'C', 3 from dual
      union all select 'C3', 'C', -8 from dual
      union all select 'C4', 'C', 10 from dual
      union all select 'C11', 'C1', 7 from dual
      union all select 'C12', 'C1', 12 from dual
      union all select 'C21', 'C2', 5 from dual
      union all select 'C22', 'C2', 9 from dual
      union all select 'C31', 'C3', 6 from dual
      union all select 'C32', 'C3', -4 from dual
      union all select 'C41', 'C4', -3 from dual
      union all select 'C42', 'C4', 13 from dual
      union all select 'C111', 'C11', 16 from dual
      union all select 'C121', 'C12', 8 from dual
    ) Select * from data order by 2
    Hi Experts,
    I have the following table with a parent-child relationship which I would like to roll up from a lower to a higher level. The catch here is that as I move up the levels I should replace the amount of a parent with the aggregate value of its children, and the grand total aggregated amount should end up in C.
    For example, in the given sample data C111 rolls up to C11 as 16 (please note: the amount 7 for C11 should not be added to 16 but replaced by 16), which should then roll up further to C1.
    This means C1 = C11 + C12 and C11 = C111 = 16, so C1 = 16 + 12 instead of 16 + 7 + 12.
    Since this is an interim table, I do not have any control over how many levels it might contain.
    So I need a query which would dynamically roll up and aggregate the amounts from a lower to a higher level, moving up from the bottom.
    Is this possible? I look forward to all the help. Thanks in advance.

    -- identify leaf amounts while walking the hierarchy, then let the MODEL
    -- clause replace every parent amount with the sum of its children
    with data as
    (
      select 'C' Child, 'P' as parent, 11 amount from dual
      union all select 'C1', 'C', -2 from dual
      union all select 'C2', 'C', 3 from dual
      union all select 'C3', 'C', -8 from dual
      union all select 'C4', 'C', 10 from dual
      union all select 'C11', 'C1', 7 from dual
      union all select 'C12', 'C1', 12 from dual
      union all select 'C21', 'C2', 5 from dual
      union all select 'C22', 'C2', 9 from dual
      union all select 'C31', 'C3', 6 from dual
      union all select 'C32', 'C3', -4 from dual
      union all select 'C41', 'C4', -3 from dual
      union all select 'C42', 'C4', 13 from dual
      union all select 'C111', 'C11', 16 from dual
      union all select 'C121', 'C12', 8 from dual
    )
    , data1
    as
    (
      select parent
           , child
           , lpad('-', (level-1)*3, '-') || parent tree_structure
           , amount
             -- only leaf rows keep their own amount; parents start from 0
           , case when connect_by_isleaf = 1 then amount else 0 end amount_leaf
        from data
       start with parent = 'P'
     connect
          by parent = prior child
    )
    select parent as node
         , tree_structure
         , amount_sum
      from data1
     model
     dimension by
     (
        child
      , parent
     )
     measures
     (
        amount_leaf as amount
      , tree_structure
      , 0 amount_sum
     )
     rules automatic order
     (
        -- each node's total = its own leaf amount + the totals of its children
        amount_sum[any,any]= amount[cv(),cv()] + nvl(sum(amount_sum)[any, cv(child)],0)
     );
    NOD TREE_STRUCTURE            AMOUNT_SUM
    P   P                                 50
    C   ---C                              24
    C1  ------C1                          16
    C11 ---------C11                      16
    C1  ------C1                           8
    C12 ---------C12                       8
    C   ---C                              14
    C2  ------C2                           5
    C2  ------C2                           9
    C   ---C                               2
    C3  ------C3                           6
    C3  ------C3                          -4
    C   ---C                              10
    C4  ------C4                          -3
    C4  ------C4                          13
    15 rows selected.

  • HDMI Audio not working on Q190 (along with all higher level Audio Formats)

    Help, I have been given the runaround by support. I cannot get the HDMI audio to work with my Pioneer surround sound; only the Intel display audio and the Realtek S/PDIF port show in Control Panel (Win 8 x64), and neither is capable of supporting 7.1 sound, bitstreaming, DTS, Dolby HD, etc. Tech support appears incapable of fixing the issue and wanted to send me to software support and pay. I have only had the machine for 4 days and it has never supported higher-level sound.
    Every other device I have (had or currently) connected to the receiver works just fine. I have to figure this out or return the machine; the audio is the most important aspect for me. Besides, when you advertise 7.1 support, the machine you sell should be able to do it.

    Hey guys,
    I have had this Q190 with the Celeron CPU since last week. I am using XBMC Frodo, the HDMI is connected to my AVR Onkyo TX-NR809, and from the Onkyo to the TV. The sound is 7.1 with PLIIZ and it works fine. I think it may be a driver problem, because the Realtek audio in the Q190 works fine with the preinstalled Win 8. Realtek is kind of bad with drivers; I lost my wifi after upgrading to Win 8.1. After a few days with no wifi, I found out that the driver was bad; yes, it was a Realtek wifi driver, but posted by Lenovo for Win 8.1.
    I have another friend who also just bought the Q190 and he reported no audio problem, so I think it is just a matter of troubleshooting the driver and configuration. I do love the form factor of the Q190.

  • Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of '

    When I deploy the cube which is sitting on my PC (local) the following 4 errors come up:
    Error 1 The datasource , 'AdventureWorksDW', contains an ImpersonationMode that that is not supported for processing operations.  0 0 
    Error 2 Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'AdventureWorksDW'.  0 0 
    Error 3 Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Customer', Name of 'Customer' was being processed.  0 0 
    Error 4 Errors in the OLAP storage engine: An error occurred while the 'Customer Alternate Key' attribute of the 'Customer' dimension from the 'Analysis Services Tutorial' database was being processed.  0 0 

    Sorry, hit the wrong button there. That is not the entire solution; setting it to the default would work when using a single box, but not in a distributed application solution. If you are creating the analysis database manually or using the wizard, then you can set the impersonation to your heart's content, as long as the right permission has been set on the analysis server.
    In my case I was using MS Project Server 2010 to create the database in the OLAP configuration section. The situation is that the underlying build script has been configured to use the default setting, which is the SQL service account, and this account does not have permission in Project Server, I believe.
    Changing the account to match the Project service account allowed for a successful build/creation of the database. My verdict is that this is a bug in Project Server, because it needs to include the option to choose impersonation when creating the database; this way it will not use the default, which led to my error in the first place. I do not think there is a one-size-fits-all fix for this problem; it is an environment-by-environment issue and should be resolved as such. But the idea behind fixing it is: if you are using the SQL Analysis Server service account as the account creating the database and cubes, then the default or service account is fine. If you are using a custom account, then set that custom account in the impersonation details after you have granted it the SQL Analysis administrator role. You can remove that role after the DB is created and harden it by creating a role with administrative permissions.
    Hope this helps.

  • SUS - Added Data Type Enhancement and Higher level Proxies are not active

    Hello,
    I've added a field to our current data type enhancement Z_Purchase_Order_Item.  Once I regenerate the proxy on the enhancement and activate it, the field appears as it should in the high-level items that use the enhancement (PurchaseOrderRequest_In).  But those proxies have become inactive, and when I try to activate them I get this message:
    Interface II_BBPX1_SUS_PO was repaired before the Modification Assistant was enabled. 
    All Modification Assistant functions only apply to future modifications, not to those already
    undertaken.  This means:
    -The modification overview only displays future modifications.
    -When resetting to the standard, the system will reset all objects to their current version, since
    the actual standard can no longer be identified by the Modification Assistant.
    -Support for adjustment after an upgrade will only be available for future modifications. 
    Modifications that already exist must be re-made manually using version management.
    The next message says:
    Object can only be created in SAP package.
    Then the status bar shows "Proxy Activated".  But when I close and reopen the proxy I see that it is once again inactive. 
    Does anyone know what I need to do to activate this proxy?
    Thanks,
    Matt

    In SPROXY you can open your proxy and then view the Activation Log under the GoTo menu.  The log will explain better what the problems might be.  In my case I needed to activate another data type enhancement first.
    Thanks,
    Matt

  • Why does OWB 9.2 generate UK's on higher levels of a dimension?

    When you specify levels in a dimension, OWB 9.2 generates unique key constraints in the table properties for every level, but only the UK on the lowest level is visible in the configuration properties. Why then are these higher-level UKs generated? Is this a half-baked attempt to implement the possibility of generating a snowflake model in OWB?
    Jaap.

    Piotr, Roald and others,
    This is indeed a topic we have spent a lot of our time on these past months. We are addressing this because (in my old days as a consultant I had the same problem) we know that this is a common problem.
    So the solution is one that goes in 2 directions:
    - Snowflake support
    - Advanced dimension data loading
    Snowflake is obvious; it may not be desired for various reasons, but we will start supporting it and loading data for it in mappings.
    If you want a star table, you will know that a completely flattened table with day at the lowest level will not be able to get you a unique entry for month. So what people tend to do is one of the following:
    - Proclaim the first of the month the month entry point (this stays closest to the star table and simply relies on semantics on both the ETL and query side).
    - Create extra day-level entries which symbolize the month, so you have a day level with extra entries
    - Create views, extra tables etc to cover the extra data
    - Create a data set within the tables that solves the key problem
    We have opted for the last one. What you need to do for this is a set of records that uniquely identifies any record in any level. Then you add a key which links to the dimension at the same point (a dimension key), so all facts always use this surrogate key to link (this makes life in query tools easier).
    For a time dimension you will have a set of day records with their months etc. in them (the regular star). Then you add a set of records with NULL in the day, having months and up, and you go up the hierarchy. For this we will provide the ETL logic (in other words, you as a designer do not worry about this!). On the query tool side you must be a little cautious about counts, but this is doable and minor.
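    To make that record structure concrete, here is a minimal sketch with made-up table and column names (an editorial illustration, not what OWB generates): higher-level rows carry NULL in the lower-level columns and get their own surrogate key, so a fact captured at month granularity can link through the same dimension key column as day-level facts.

        create table time_dim (
          dim_key   number primary key,  -- surrogate key shared by all levels
          day_dt    date,                -- null for month-level and higher rows
          month_cd  varchar2(7),
          year_nr   number
        );
        create table sales_fact (
          time_dim_key number references time_dim(dim_key),
          amount       number
        );
        -- day-level row (the regular star)
        insert into time_dim values (1001, date '2009-01-15', '2009-01', 2009);
        -- month-level row: day is null, the row stands for January 2009 as a whole
        insert into time_dim values (2001, null, '2009-01', 2009);
        -- a fact planned at month granularity links through the same surrogate key
        insert into sales_fact values (2001, 500);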
    As you can see, none of the solutions are completely transparent, but we believe this is one that solves a lot of problems and gives you the best of all worlds. We will also support the same data structure in the OLAP dimensions for the database as well as in the relational dimension. NOTE that there are some disclaimers with this, as we are doing software here...
    In principle, however, we will solve your problem.
    Hope this explains some of our plans in this area.
    Jean-Pierre

  • Why do we need to plan promotions at the lowest level of aggregation

    Hi,
    The documentation says that we need to plan promotions at the lowest level of aggregation, i.e., the material level. Why? Is there a specific reason for this? Can we plan at other levels of aggregation as well? What happens if we plan at higher levels of aggregation?
    Thanks.

    I think it is possible to do it at an aggregated level; however, you need to define your distribution rules in order to get the desired result. You also need to consider that if the distribution rules change and the value after promotional planning comes back the same, it is possible that the detailed level is not realigned to the new distribution rule (e.g. with respect to another ratio).
    Maybe this is one of several causes.
    Regards,
    Carlos
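
    To make the point about distribution rules concrete, here is a tiny, generic sketch of proportional disaggregation (plain SQL invented for illustration, not APO's actual disaggregation engine): a quantity planned at an aggregate level is pushed down to materials according to a distribution ratio, so the detailed result depends entirely on that ratio even when the aggregate total is unchanged.

        with ratio as (
          select 'MAT_A' material, 0.6 share from dual
          union all select 'MAT_B', 0.4 from dual
        )
        -- 1000 units planned at the aggregate level, split by the stored ratio
        select material, 1000 * share as planned_qty
          from ratio;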

  • High Level Recommendations For Multi-Tier Application

    Hello:
    I have been reviewing Windows Azure documentation and I'm still somewhat confused/unsure regarding which configuration and set of services is best for my organization.  I will start off by giving a high-level description of what the environment should be.
    A) 2 "Front End" IIS Instances, Load Balanced running an MVC 4.0/.Net 4.5 Web Application
    B) A "dedicated" SQL SERVER 2008 R2 server with medium-high resources (ample RAM and processing power)
    C) An application server which hosts a Windows Service.  This service will require access to the SQL Server listed in B. In addition the IIS "Front Ends" listed in A should have access to a "shared" folder or directory where files
    can be dropped and processed by this windows service.
    I have looked at Azure Web Sites, Azure Virtual Machines and Cloud Services, and I'm not sure what is best for our situation.  If we went with Azure Web Sites, do we need TWO virtual machines, or a single virtual machine which can "scale out" up to 6 instances?  We would get a Standard Web Site, and the documentation I see says it can scale out to 6 instances. I'm somewhat confused regarding the difference between a "Virtual Machine" and an "Instance".  In addition, does Azure Web Sites come with built-in load balancing between instances, virtual machines, or both?  Or is it better to go with Azure Virtual Machines and host the IIS front end there?  I'm just looking for a brief description/advice as to which would be better.
    Regarding the SQL Server database, is there a benefit to using Azure SQL Database? Or should we go with a virtual machine with SQL Server installed as the primary template?  We have an existing SQL Server database, and initially we would like to move our existing schema up to the Cloud.  We are looking for decent processing power and RAM for the database.
    Finally the "application" tier, which requires a Windows Service. Is an Azure Virtual Machine the best route to take? If so, can an Azure Web Site (given that is the best setup for our needs) write to a shared folder/drive on a secondary virtual
    machine.  Basically there will be json instruction files dropped  into a folder which the application tier will pick up, de-serialize and do backend processing.
    As a final question, if we also wanted to use SSRS, is there updated/affordable pricing and hosting options for this as well?
    I appreciate any feedback or advice.  We are definitely leaning towards Azure and I am trying to wrap my head around what our best configuration and service selection should be.
    Thanks in advance

    Hi,
    A) 2 "Front End" IIS Instances, Load Balanced running an MVC 4.0/.Net 4.5 Web Application
    B) A "dedicated" SQL SERVER 2008 R2 server with medium-high resources (ample RAM and processing power)
    C) An application server which hosts a Windows Service.  This service will require access to the SQL Server listed in B. In addition the IIS "Front Ends" listed in A should have access to a "shared" folder or directory where files can be dropped and
    processed by this windows service.
    Based on my experience and your requirements, you could try this solution:
    1. Two cloud services to host your "front end" web application. Considering load balancing, you could use Traffic Manager to configure the load balancing settings.
    2. About SQL Server or SSRS, you have two choices: 1) create a SQL Server VM, or 2) use SQL Azure and Azure SSRS.
    I guess either of them could meet your requirements.
    3. About your C requirement, which type of application is it? If it is a website, you could host it on Azure Web Sites or a cloud service.
    And if you want to manage the files from your code, I think you could save your files into Azure Blob storage. You could add and delete files using the REST API (http://msdn.microsoft.com/en-us/library/windowsazure/dd135733.aspx
    ) or code (http://www.windowsazure.com/en-us/documentation/articles/storage-dotnet-how-to-use-blobs-20/ ). And Blob storage could serve as a shared file folder.
    And to be accurate about the billing question, you could ask Azure billing support for more details.
    Try this: http://www.windowsazure.com/en-us/support/contact/
    Hope it helps.
    Regards,

  • High level estimation for datasource enhancement

    hi,
    I have recently been assigned to a support project.  Now we are going for a data source enhancement (we need one more field, i.e. jurisdiction code, for tax calculation; the DataSource is 0FI_GL_4), and we need a high-level estimation of the procedure and the time duration for each step (Development to Production).
    Can anyone help me with this?
    Thanks,
    Shaliny

    Shaliny,
    If the field is readily available to add with no hiccups, everything can be done in 3 days.
    However, if you need to go for a customer exit during the enhancement, you need at least 15 days, which includes testing.
    It generally depends on how complex it is expected to be. But keep a buffer of at least 2 days after completion on your side.

  • High-level interrupt handler

    Why do I get to decide whether or not to support a high-level interrupt? Under what conditions will the Solaris kernel map my hardware interrupt (INTA from the PCI bus) to a high-level interrupt? When should I refuse to support a high-level interrupt? Why? Can I force my hardware interrupt to be a high-level interrupt?
    Also, think about this: most hardware interrupts indicate something important, such as the case where buffers are full. If they are assigned below the scheduler's priority, it really does not make sense.
    Is it possible to block any hardware interrupts? Or, to put it another way, can I prioritize hardware interrupts in Solaris?
    Thanks
    tyh

    Hi,
    On x86 each IRQ has a software priority assigned to it implicitly by the bus driver, although I think you could override it in driver.conf. Unlike SPARC, the processor doesn't support a PIL, so software priorities are implemented by masking all lower-priority IRQs and re-enabling interrupts.
    High-priority interrupts, above dispatcher level, run in the context of the current thread on the CPU; normal-level interrupts are handled by interrupt threads.
    The interrupt threads are the highest priority threads on the system, so will preempt any other running threads. In addition mutexes in Solaris use priority inheritance, so the interrupt threads will get to run.
    In general, high level interrupts are allocated to devices with small buffers such as serial or floppy, so that their buffers get serviced in the fastest possible time. Others can afford to wait for just a bit.
    Your driver should check to see if its device has been allocated a high level interrupt. If this is the case, the high level handler should clear the interrupt and save the data/status (in the driver state structure perhaps) and trigger your soft level interrupt handler (which will run as a thread).
    Blocking of interrupts is done for you when you acquire a spin mutex (ie initialised with an iblock cookie). Such a mutex is required to synchronise access to data shared with a high level handler in your driver.
    Please take a look at the Intel Driver writers orientation at:
    http://soldc.sun.com/developer/support/driver/docs/Solaris_driver_models/index.html
    Hope that helps,
    Ralph
    SUN DTS

  • High-Level JTS/TopLink design question

    I've gone through the "using JTS with TopLink" docs, and it mostly makes sense. However, I still don't understand how TopLink "knows", when I call acquireUnitOfWork(), whether or not I'm participating in a distributed 2PC transaction.
    Said another way:
    Let's say I've got an application based on TopLink (registering appropriate JTS stuff) that exposes an API that can be accessed remotely (RMI, SOAP, whatever).
    And, I've got another, separate application using a different persistence-layer technology (also supporting JTS) that also has an API.
    Now, I create a business method that uses the APIs from both of these applications, and I want them to participate in a single, distributed transaction.
    At a high level (source code is unnecessary), how does that work?
    Would the API need to support an ability to specify a TransactionContext, or is this all handled behind the scenes by the 2 systems registering with the Transaction Service?
    If this is all handled through registration, how do these 2 systems know that these specific calls are all part of the same XA transaction?

    Nate,
    TopLink participates in JTA/JTS transactions but does not control them. When you configure TopLink to use the JTA/JTS services of the host application server, you are deferring TX control to the J2EE container. TopLink will in this case register each acquired UnitOfWork in the current active TX from the container. The container will also ensure that the JDBC connection provided to TopLink is bound by the active TX.
    In order to get 2PC you must register multiple resources in the same JTA TX. The TX processing during commit will then make the appropriate callbacks to the underlying data sources, as well as the necessary callbacks to listeners such as TopLink, to have its SQL issued against the database.
    In short: The J2EE container manages the 2PC TX and TopLink is just a participant.
    Doug Clarke
