Aggregation within an Infospoke

Hello,
Has anyone been able to extract aggregated data using an InfoSpoke in BW 3.5? The aggregation should be dictated by the characteristics one includes on the InfoObject tab. For example, if I have order number and line item number in the cube and I only include order number in the spoke, all my key figures should be summed across line items at the order level. This manner of aggregation worked in BW 3.1.

Bhanu,
What SP level are you on? Is there any flag (besides the characteristics you include on your InfoObject tab) that dictates the aggregation? Are there any special entries in your RSADMIN table relating to this?
Manish

Similar Messages

  • Double aggregation in a single query block doesn't make any sense.

    How can I argue with something that has apparently been cast in stone by the ANSI SQL committee? Well, the answer is famous: "Search any park in any city: you'll find no statue of a committee".
    OK, why is
    select count(1) from (
    select deptno from emp
    group by deptno
    )
    an easy-to-understand query, and why is
    select count(count(*)) from emp
    group by deptno
    not? I already mentioned one reason why count shouldn't accept any arguments; therefore count(count(*)) is nonsense.
    The other reason is that aggregation without grouping is essentially aggregation within a single group. Once you realize that
    select sum(1) from emp
    is the same as
    select sum(1) from emp
    group by -1
    (where -1 or any other constant for that matter is a dummy pseudocolumn), then it becomes obvious that what we are doing in the infamous
    select count(count(*)) from emp
    group by deptno
    is a query with two blocks
    select count(1) from (
    select deptno from emp
    group by deptno
    ) group by -1
    We are not allowed to combine two "group by" clauses into a single query block, are we?

    An aggregate function always goes together with grouping. Grouping can partition the set of rows into many classes or into a single class. Therefore, if we have two nested aggregation functions, we'd better be able to identify the corresponding groupings easily:
    select state, avg(min(tax_return)) from household
    group by city, state then state
    which is shorthand for
    select state, avg(m) from (
       select city, state, min(tax_return) m
       from household
       group by city, state
    ) group by state
    Speaking of double aggregation, it is frequent in graph queries. The part explosion query is posted virtually every month on this fine forum :-) Part explosion is double aggregation: multiply the quantities along each path in the assembly hierarchy, then add the quantities along alternative paths. Likewise, finding the shortest path between two nodes in a graph is a double aggregation query: first we calculate the length of each path by adding the distances along it, and then we choose a path with minimal length. Wouldn't it be nice to have this double aggregation wired into the connect by syntax? Note that connect_by_path is a surrogate aggregate which concatenates strings. People invent all kinds of functions which parse this path and make other aggregates out of this value (such as sum and product).
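    For illustration, here is a minimal sketch of that part-explosion double aggregation, written with a recursive WITH clause (Oracle 11gR2+) rather than connect by; the bom(parent, child, qty) table and the 'ROOT' part are hypothetical stand-ins:
    -- Hypothetical assembly table: bom(parent, child, qty), where qty is
    -- how many units of child go into one unit of parent.
    WITH explode (part, path_qty) AS (
      -- anchor: direct components of the root assembly
      SELECT child, qty
      FROM   bom
      WHERE  parent = 'ROOT'
      UNION ALL
      -- first aggregation: multiply quantities along each path
      SELECT b.child, e.path_qty * b.qty
      FROM   explode e
      JOIN   bom b ON b.parent = e.part
    )
    -- second aggregation: add quantities across alternative paths
    SELECT part, SUM(path_qty) AS total_qty
    FROM   explode
    GROUP  BY part;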

  • Errors using themes and tomahawk within portlets

    I've been developing a portlet using Creator2 Update 1 that uses the myfaces tomahawk library of components. Besides the difficulty of not being able to use the visual designer, I've managed to get my portlet functioning the way I want.
    My next step was to apply styles to my portlet. The difficulty here is that since portlets are aggregated within a portal, the portlet does not have access to the <head> tag of the container page (meaning I couldn't just add my own stylesheets and link them in.) I'm not sure how Creator works around this problem - I just know that it manages to using <ui:themeLinks> somehow.
    Changing the theme for a regular Creator-components-only portlet (read: no tomahawk) is a no-brainer. I simply pick a different theme in the Project view and set it as the Current Theme. I can even set my own user-defined theme. Running the portlet either through Creator or Liferay shows the applied theme.
    However, once I start using tomahawk components, the theme system breaks down, throwing exceptions, such as the following:
    com.sun.rave.web.ui.theme.ThemeConfigurationException: WARNING: the Sun Web Components could not load any themes.
    at com.sun.rave.web.ui.theme.ThemeFactory.createThemeManager(ThemeFactory.java:274)
    Curiously, it only breaks if I use one of my user-defined themes. The Creator-provided themes will work with the tomahawk components portlet.
    In short, I am baffled. Creator-provided themes work regardless of tomahawk components present. My own themes work so long as there aren't tomahawk components present.

    Sorry, I don't think that's going to work. Themes are not used in the standard SES index, and therefore the Oracle Text knowledgebase is not installed - hence the DRG-11446 error you're seeing.
    If you figured out a way to install the knowledgebase from another system (and I'm NOT recommending that), you would still need to recreate the text index with INDEX_THEMES turned on.
    You should be able to connect to the SES instance from a remote machine by commenting out both "tcp.invited_nodes" and "tcp.validnode_checking" from the sqlnet.ora file. Not sure why just adding an entry to tcp.invited_nodes didn't work for you.
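    For reference, a minimal sketch of recreating a context index with themes enabled, assuming the knowledgebase is installed; the docs table, text_col column, and the index and preference names are hypothetical:
    -- INDEX_THEMES is an attribute of the BASIC_LEXER in Oracle Text.
    BEGIN
      ctx_ddl.create_preference('my_theme_lexer', 'BASIC_LEXER');
      ctx_ddl.set_attribute('my_theme_lexer', 'INDEX_THEMES', 'YES');
    END;
    /
    CREATE INDEX docs_text_idx ON docs(text_col)
      INDEXTYPE IS ctxsys.context
      PARAMETERS ('LEXER my_theme_lexer');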

  • Negative result caching, aggregation threads

    I have two questions:
    1. Do any of the coherence caches do "negative" result caching? An example to explain what I mean:
    I have a near cache in front of a partitioned cache which is backed by a database. I do a get which looks in the near cache, the partitioned cache, and the DB, and doesn't find the value. If I then do another get for the same key, will Coherence go all the way to the DB again to look for it? Does containsKey work the same way?
    2. Is it possible to increase the number of threads used for aggregation on a single Coherence node? I have a machine with lots of cores, and a parallel aggregator that uses a fair bit of CPU. I would like Coherence to run multiple instances of the aggregator in parallel without me having to start lots of processes.

    Hi Cormac,
    if I understand correctly, what you mean is: in case there are idle threads in the thread pool, you want them to be utilized so that multiple threads work on the same aggregation within the same storage node, dividing the partitions among them.
    Splitting a single partition between multiple aggregators would contradict earlier answers about the behaviour of aggregators and possibly also break the documented API, and in any case it would render parallel aggregation unusable by weakening the guarantee that entries with a common partition affinity are aggregated together.
    The above is not possible in the current version, and I am not sure if it will be possible in upcoming versions, but some changes in the just-released developer pre-release make it less costly than it was up to 3.4.2.
    One of the problems is that AbstractAggregator is stateful in the sense that it expects you to maintain a temporary result in an attribute of the aggregator, therefore
    - either your code would have to be thread-safe (a requirement that is not documented, so introducing it would possibly break existing code out there). This would also mean an increased cost in context switching due to synchronizing your changes to those attributes across multiple threads;
    - or Coherence would have to instantiate multiple instances of your aggregator within the same storage node, which comes with a somewhat increased memory footprint. Otherwise this would be doable.
    On the other hand, remember that just because a thread is idle at one moment, it does not mean that there won't be many more requests coming in very soon afterwards, which would be unnecessarily delayed by a parallel aggregator that consumes too many threads.
    Best regards,
    Robert

  • Infospoke (Open Hub)

    Hi,
    I want to extract the contents of a query to a flat file using an infospoke rather than rscrm_bapi, due to some consistency issues. The query I am interested in has certain calculated key figures (summations of cumulative key figures). Since the infospoke cannot extract calculated key figures into a table or file, is there an option for doing this?
    Is a BAdI an option for bringing the calculated key figures into a file or table?
    Thanks

    Hi,
    You can use the BAdI within the infospoke to perform the calculations yourself and then extract the data to a table or file. So yes, you can manipulate the data using the BAdI and then extract it.
    Cheers,
    Kedar

  • Cube and aggregation

    Hi All,
    I have created a cube using the OWB wizard. The aggregation tab applies to all measures of the cube, whereas I want to apply aggregation to an individual measure. Any ideas?
    Jak.
    Message was edited by:
    jakdwh

    Hi,
    I have tried to assign different aggregation rules to different measures within the OWB cube editor without success.
    I created a simple cube with two measures, one requiring a sum aggregation and one requiring an average aggregation. I can set them OK and validate OK, but when I close the editor and re-open it, they have both been set to the overall cube aggregation policy. Even if this is set to NOAGG, they all become NOAGG. It seems there must be a flaw in the implementation of aggregation within OWB.
    Has anyone found a resolution or workaround? I came across this when trying to utilise degenerate dimension keys in the cube definition....
    Robbie

  • More than 1000 photos on photo stream?

    Is there a way to add more than 1000 photos to photo stream?

    No but you could probably create a shared photo stream to hold another 1000 photos and only invite yourself to the shared photo stream if you didn't want to really share them.  Haven't tested this but I don't believe your photo stream and shared photo streams are aggregated within this limit.

  • How to Aggregate "Value Based Hierarchies"

    I have a typical parent-child dimension (Company).
    I created the Company dimension using Analytic Workspace Manager, defining the hierarchy as a "Value Based Hierarchy".
    In the fact table there are some entries related to non-leaf nodes of the Company dimension.
    After the loading process I view the cube, but I see the aggregation computed only over leaf nodes; the intermediate nodes' own amounts were not summed in.
    Example on Company table ( from "Kimball Design Tip #17: Populating Hierarchy Helper Tables" )
    Microsoft               
    ---Software
    ------Consulting
    ------Products=================AMOUNT==>100.0
    ---------Office
    ------------Visio
    ---------------Visio-Europe====AMOUNT==>100.0
    ---------Back-Office
    ------------SQL-Server
    ---------------OLAP-Services===AMOUNT==>100.0
    ---------------DTS
    ---------------Repository======AMOUNT==>100.0
    ---------Developer-Tools
    ---------Windows
    ---------Entertainment
    ------------Games
    ------------Multimedia
    ------Education
    ---Online-Services=============AMOUNT==>100.0
    ------WebTV
    ------MSN
    ---------MSN.co.uk
    ---------Hotmail.com
    ---------MSNBC
    ------------MSNBC-Online=======AMOUNT==>100.0
    ---------Expedia
    ------------Expedia.co.uk
    Expected Aggregation: 100 + 100 + 100 + 100 + 100 + 100 = 600
    But I see only 400; it seems that the Online-Services and Products entries were not aggregated.
    What's wrong?
    Andrea

    Aggregation aggregates from lower levels to higher levels, overwriting whatever values might be in the upper-level cells. Aggregation within the analytic workspace is not cumulative. The problem is that cumulative aggregation is easy to define the first time the data is aggregated, but it's very difficult to decide what to do in later aggregations. E.g., in your example you would expect the following values at these nodes (I'm not including the whole example):
    Microsoft 400.00
    ---Software 400.00
    ------Consulting 400.00
    ------Products=================AMOUNT==>100.0
    ---------Office 100.00
    ------------Visio 100.00
    ---------------Visio-Europe====AMOUNT==>100.0
    ---------Back-Office 200.00
    ------------SQL-Server 200.00
    ---------------OLAP-Services===AMOUNT==>100.0
    ---------------DTS
    ---------------Repository======AMOUNT==>100.0
    If you aggregate again and aggregation is cumulative, you would get:
    Microsoft 1900.00
    ---Software 1500.00
    ------Consulting 1100.00
    ------Products=================AMOUNT==>700.0
    ---------Office 300.00
    ------------Visio 300.00
    ---------------Visio-Europe====AMOUNT==>100.0
    ---------Back-Office 400.00
    ------------SQL-Server 400.00
    ---------------OLAP-Services===AMOUNT==>100.0
    ---------------DTS
    ---------------Repository======AMOUNT==>100.0
    That is, there's no way to understand whether to accumulate or not.
    You can model this by adding dummy detail members for all summary members and aggregating from those. E.g.,
    Microsoft
    ---Software
    ------Consulting
    ------Products
    ------------------- Products Detail 100.00
    ---------Office
    ------------Visio
    ---------------Visio-Europe====AMOUNT==>100.0
    ---------Back-Office
    ------------SQL-Server
    ---------------OLAP-Services===AMOUNT==>100.0
    ---------------DTS
    ---------------Repository======AMOUNT==>100.0
    ---------Developer-Tools
    ---------Windows
    ---------Entertainment
    ------------Games
    ------------Multimedia
    ------Education
    In this case, you can aggregate as often as you like.
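    A hedged SQL sketch of that remodeling, with hypothetical tables company_dim(member_id, parent_id) and fact_sales(company_id, amount):
    -- 1. Create a synthetic detail leaf under every non-leaf member
    --    that carries fact data.
    INSERT INTO company_dim (member_id, parent_id)
    SELECT d.member_id || '_DETAIL', d.member_id
    FROM   company_dim d
    WHERE  EXISTS (SELECT 1 FROM fact_sales f WHERE f.company_id = d.member_id)
    AND    EXISTS (SELECT 1 FROM company_dim c WHERE c.parent_id = d.member_id);
    -- 2. Repoint fact rows from non-leaf members to their new detail
    --    leaves, so every fact row maps to a leaf and plain bottom-up
    --    aggregation can be rerun as often as needed.
    UPDATE fact_sales f
    SET    f.company_id = f.company_id || '_DETAIL'
    WHERE  EXISTS (SELECT 1 FROM company_dim c
                   WHERE  c.parent_id = f.company_id
                   AND    c.member_id = f.company_id || '_DETAIL');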

  • Validation Error: VLD-1111

    I'm trying to validate a mapping between a source and staging table and receive the following error message: "VLD-1111: The mapping cannot be generated due to operators requiring different code languages."
    My source table is in an Oracle 9i DB and so is the target staging table, which is in the OWB repository. What does this error mean?
    TIA,
    David Wagoner
    Oracle DBA

    David,
    This validation error is typically given when you mix, for example, SQL*Loader operators with SQL operators: say you place an aggregator within a mapping that contains a flat file as a source. SQL*Loader cannot determine what to do with the aggregator, and PL/SQL does not know how to handle the file.
    As you seem to have a table as both source and target, I'm wondering if there are any other operators on the canvas that are intended for SQL*Loader, like the data generator? If that is the case, replace the data generator with either a constant or a sequence; that should take care of the validation message.
    Jean-Pierre

  • AWM 11g too slow, why?

    Hello,
    I installed the Oracle client on a Windows client machine. I work with AWM 11g with patch ID 6368282.
    On the server side (Linux), there is Oracle Database 11g Enterprise Edition with patch ID 6459753.
    In my OLTP system I created, as a test, two tables (radar and city) plus a bridge table between them, each with 90 records.
    My problem is:
    When I create my dimensions, mappings, etc. and update my cube, the update takes only a few seconds; but when I want to view the cube, AWM is blocked for several hours and the server works very hard.
    I do not understand why it takes so long with so few records.
    Someone can explain it to me?
    Thanks in advance

    There is not enough information here to determine what is happening. I am not sure what you mean when you state
    "I update my cube that it take within a few seconds but when I want to see the cube, AWM is blocked"
    If you run the load from AWM, once the cube has updated and completed, a window will pop-up showing the log for the data load. If this window does not appear then the load is still running. If you submit the job to the job queue, you will need to exit from AWM to allow the job to complete. If you remain attached to the AW this will block the job from running since AWM attaches the AW in read-write mode but the refresh job needs read-write exclusive access.
    The other option is your dimensions have circular references within the parentage and the aggregations within your dimensions are looping endlessly. Did you assign surrogate key usage to your dimension members?
    Keith Laker
    Oracle Data Warehouse Product Management
    OLAP Blog: http://oracleOLAP.blogspot.com/
    OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
    DM Blog: http://oracledmt.blogspot.com/
    OWB Blog : http://blogs.oracle.com/warehousebuilder/
    OWB Wiki : http://wiki.oracle.com/page/Oracle+Warehouse+Builder
    DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html

  • How to reduce CPU Utlisation while working with OLAP Cubes?

    Hi All,
    I am working with OLAP cubes on MS SQL Server 2005. I have imported them into the OBIEE RPD, and when I query them in Answers, the CPU utilisation increases to 98%.
    The results also take a long time to display.
    Is there any other method with which I can reduce the CPU usage?
    Regards,
    Apoorv

    Sorry, but that's like asking "My Data Warehouse is slow! What can I do?!".
    Starting from the top of the food chain it depends on:
    - how your report is built
    - what the logical SQL is that this report fires against the BI server
    - your logical business model
    - your physical model in the rpd
    - whether you use BI server aggregation or external aggregation for the cube source columns
    - the structure of your cube
    - whether your cube is pre-aggregated or not
    - whether you did any performance tuning on your cube
    ...and tons of other things. I can query 5 dimensions with thousands of dynamic and shared members against dynamic (not stored) accounts without performance impact if I'm hitting an optimized cube (or area of my cube), yet see more than a minute of response time for another, comparatively simple query that hits a non-aggregated cube and does the aggregation within the BI server (just as an example).
    Start by verifying that your cube is nicely built.

  • Extract aggregate data from R/3

    Hello SDN,
    we want to extract data from a table in R/3 which holds detailed data - much more detailed than we would like to query in BW. Of course we can do the aggregation in BW,
    but in fact we are not interested in the detailed level, so why transfer the detail?
    In our case we extract about 5 million records (in one request!) but in the end we write only 100,000 to the InfoCube.
    As an example, you can imagine AMOUNT being held at item level while we only want to extract it at document level.
    I suppose that this is a very frequent requirement and I guess that somebody might have come across it.
    Currently I am thinking about using a function module to do it.
    What would be the best way to do it in terms of performance?
    Using DB cursors? COLLECT? SELECT ... GROUP BY?
    Thanks for sharing your experience and kind regards
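    For what it's worth, the document-level aggregation could be pushed into the extractor with a single statement; the sales_items(doc_number, item_number, amount) table here is a hypothetical stand-in for the detailed R/3 table:
    -- Aggregate item-level amounts to document level inside the
    -- extractor, so only one row per document is transferred to BW.
    SELECT doc_number,
           SUM(amount) AS amount
    FROM   sales_items
    GROUP  BY doc_number;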

    Hi Joachim,
    why would you spend a lot of time and money creating your own aggregation in ABAP when BW, an ODS object, or a cube will do that for you? You have to read the data anyway, so the only difference is the number of records transferred from R/3 to BW. Just extract the detail and summarize it in BW.
    By the way: if you do an initial load of COPA data, it is easily possible that you will extract 20 million records or more.
    regards
    Siggi

  • Error - failed to load 'sap/m/columns.js'

    Hi,
    I'm getting the following error as soon as I add a <columns> tag inside a <Table> tag. Error:
    Uncaught Error: failed to load 'sap/m/columns.js' from resources/sap/m/columns.js: 404 - Not Found
    Here is main.view.xml:
    <mvc:View
      controllerName="com.ttf.orgoto.view.main"
      xmlns:l="sap.ui.layout"
      xmlns:mvc="sap.ui.cores.mvc"
      xmlns:html="http://www.w3.org/1999/xhtml"
      xmlns="sap.m">
      <Page title="Organizasyon Otomasyonu">
      <content>
      <IconTabBar>
      <items>
      <IconTabFilter
                            icon="sap-icon://retail-store"
                            text="Mekan Seçimi">
      <Table id="placeTbl" >
      <headerToolbar>
      <Toolbar>
      <Label text="Mekan Seçimi"></Label>
      </Toolbar>
      <columns>
      </columns>
      </headerToolbar>
      </Table>
      </IconTabFilter>
      <IconTabFilter
                            icon="sap-icon://meal"
                            text="Sipariş Seçimi">
      <Text text="Info content goes here ..." />
      </IconTabFilter>
      </items>
      </IconTabBar>
      </content>
      </Page>
    </mvc:View>
    Here is index.html:
    <!DOCTYPE HTML>
    <html>
      <head>
      <meta http-equiv="X-UA-Compatible" content="IE=edge">
      <meta http-equiv='Content-Type' content='text/html;charset=UTF-8'/>
      <script src="resources/sap-ui-core.js"
      id="sap-ui-bootstrap"
      data-sap-ui-libs="sap.m,sap.ui.table"
      data-sap-ui-xx-bindingSyntax="complex"
      data-sap-ui-resourceroots='{"com.ttf.orgoto": "./"}'
      data-sap-ui-theme="sap_bluecrystal">
      </script>
      <!-- only load the mobile lib "sap.m" and the "sap_bluecrystal" theme -->
      <script>
      sap.ui.localResources("view");
      var app = new sap.m.App({initialPage:"idmain"});
      var page = sap.ui.view({id:"idmain", viewName:"com.ttf.orgoto.view.main", type:sap.ui.core.mvc.ViewType.XML});
      app.addPage(page);
      app.placeAt("content");
      </script>
      </head>
      <body class="sapUiBody" role="application">
      <div id="content"></div>
      </body>
    </html>

    You're adding the <columns> aggregation inside the <headerToolbar> aggregation. <columns> is a separate aggregation of the Table, so move it after </headerToolbar>, as a direct child of <Table>.
    https://sapui5.hana.ondemand.com/sdk/explored.html#/sample/sap.m.sample.Table/code
    - Sakthivel

  • What is the difference between drill across and drill down?

    Hi Friends,
    Please give a clarification about Drill across,Drill down,Drill through With Examples?
    Thanks in Advance
    Rani

    Hi,
    1 - Drill Down: when you are at a higher level of aggregation and want to go to a lower level within the same dimension. Ex. Time dimension: year to quarter to month...
    2 - Drill Up: when you are at a lower level of aggregation and want to go to a higher level within the same dimension. Ex. Time dimension: months to quarters to years.
    3 - Drill Across: when you move to the same (or another) level of aggregation in a different environment. Ex. Time dimension: year/quarter in an Essbase data mart to Time dimension: months in a DW in a relational database.
    4 - Drill Through: when you are at a level of aggregation within one dimension and jump from that level into another dimension. Ex. Product dimension: product name to Customer dimension: customer name, e.g. to find which products particular customers bought.
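    As a small illustration, drill down is just regrouping at a finer grain within the same dimension; the sales(year, quarter, revenue) table is hypothetical:
    -- Year level:
    SELECT year, SUM(revenue) AS revenue
    FROM   sales
    GROUP  BY year;
    -- Drill down to quarter level within the same (time) dimension:
    SELECT year, quarter, SUM(revenue) AS revenue
    FROM   sales
    GROUP  BY year, quarter;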
    For more Kindly refer:
    http://gerardnico.com/wiki/dat/obiee/drill
    http://gerardnico.com/wiki/analytic/drill_down_up
    http://gerardnico.com/wiki/analytic/drill_across
    http://gerardnico.com/wiki/analytic/olap_operation
    http://obieetips.blogspot.com/2009/05/obiee-drill-throughdrill-down.html
    Thanks
    Deva
    Edited by: Devarasu on Dec 12, 2011 4:28 PM

  • Viewing logs in Centralized Management

    Hi Everybody,
    Before enabling Centralized Management on my ESAs (C370 with AsyncOS 7.6.3), I used to investigate issues by viewing the logs on the "Log Subscriptions" page of the GUI. On this page, there was a very nice column called "Log Files", with direct HTTPS access to the logs.
    But after enabling Centralized Management, this column just disappeared...
    The URLs are still valid (e.g. for anti-spam logs:
    https://@IP/system_administration/log_list?CSRFKey=3f8c9d04-40ce-4b95-a781-b41cb6261e00&log_type=antispam), but there is no longer a link to access them.
    Any idea how to restore these links, or otherwise access the logs via HTTPS?
    Thank you for your help.
    Best Regards
    Quentin

    Quentin -
    Log files for a cluster are not stored at the machine level, and therefore are not going to be available there. The only retrieval methods are FTP, SCP push, and syslog push.
    Usually on the System Administration - Log Subscription page there are the following columns:
    - Configured Log Subscriptions
    - Type
    - Log Files
    - All, Rollover
    - Delete
    As soon as a cluster is created the Log Files column containing the ftp links doesn't appear anymore, which is normal behavior.
    Per the Configuration Guide:
    Manually Download
    This method lets you access log files at any time by clicking a link to the log directory on the Log Subscriptions page, then clicking the log file to access. Depending on your browser, you can view the file in a browser window, or open or save it as a text file. This method uses the HTTP(S) protocol and is the default retrieval method.
    Note: Using this method, you cannot retrieve logs for any computer in a cluster, regardless of level (machine, group, or cluster), even if you specify this method in the CLI.
    Per the Advanced Guide:
    Q. Are log files aggregated within centrally managed machines?
    A. No. Log files are still retained on each individual machine. The Security Management appliance can be used to aggregate mail logs from multiple machines for the purposes of tracking and reporting.
    Hope that helps!
    -Robert
