Oracle AW OLAP or Cognos PowerPlay for cube building

We are starting a pilot project for BI reporting and are exploring and testing two options: OLAP in Oracle 10g Analytic Workspaces (AW) and Cognos PowerPlay. Our intent is to use Cognos ReportNet as our reporting tool.
I tend to lean toward building dimensions/cubes in the Oracle Analytic Workspace rather than Cognos PowerPlay.
I need to know the pros and cons of cube design in Cognos vs. Oracle AW from the experts out there; I don't have enough experience under my belt yet.
Also, if I head down the path of using the Oracle AW, can I efficiently use Cognos ReportNet to access the AW? Would that be via SQL or the Oracle OLAP API?
Thanks All

Hi, I can't comment on your first question.
Yes, you can use Cognos ReportNet with OLAP cubes. Any SQL-emitting query tool can access the views that you create over 10g OLAP cubes. Note that in Oracle OLAP 11g these views are created automatically, which makes it very easy.
You (or your tools) write SQL against the cube views and leverage the aggregated data and calculations in the cube, while keeping the capabilities of Cognos ReportNet. There may be a few tricks needed to get Cognos ReportNet to write efficient SQL; maybe someone else out there has more input?
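As an illustration, here is a minimal sketch of the kind of SQL a ReportNet report might run against cube views. The view and column names are hypothetical; they depend entirely on how the views over your AW are defined:

```sql
-- Hypothetical view and column names; a sketch only.
-- The aggregation and the calculated measure come from the cube,
-- not from the reporting tool.
SELECT t.calendar_year,
       p.department_name,
       f.sales,          -- pre-aggregated in the AW
       f.sales_pct_chg   -- calculation defined inside the cube
FROM   sales_cube_view  f
JOIN   time_dim_view    t ON t.time_id    = f.time_id
JOIN   product_dim_view p ON p.product_id = f.product_id
WHERE  t.level_name = 'YEAR'
  AND  p.level_name = 'DEPARTMENT';
```

The level filters matter: cube views typically return every level of every hierarchy as rows, so an unfiltered query multiplies the result set.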

Similar Messages

  • Using or migrating Cognos Powerplay Cubes in/to Oracle OBIEE V 11.1.1

    Hello,
    Is it possible to use Cognos cubes (as used in Cognos PowerPlay) in OBIEE out of the box, or with some tricks?
    If not, is there an elegant way to migrate without starting from scratch?
    Any ideas, good links, or information about this topic?
    Thanks ahead,
    Eric
    Edited by: user8824510 on 21.03.2012 09:30

    Hi Tina,
    I unzipped the install file to the root drive, i.e. C:\bishiphome in my case (or D:\bishiphome in yours). Some of the folder structure for SampleAppLite goes pretty deep, and since you've got the install folder nested inside other folders, I'm wondering if it's hitting the Windows file path limit.
    I know for a fact that when I installed 11g to C:\app\oracle\product\obiee11\<obiee11>, I couldn't delete the SampleAppLite files when it came time to uninstall because their file paths were so long. From then on, I just installed to the root directory to avoid problems.
    Hope this helps!
    -Joe

  • Oracle OLAP cube build question

    Hello,
    I am trying to build a reasonably large cube (around 100 million rows from the underlying relational fact table). I am using Oracle 10g Release 2. The cube has 7 dimensions, the largest of which is TIME (6 years of data, with day as the lowest level). The cube build never finishes.
    Apparently it collapses while doing "Auto Solve". I'm assuming this means calculating the aggregations for the upper levels of the hierarchy (although this is not mentioned in any of the documentation I have).
    I have two questions related to this:
    1. Is there a way to keep these aggregations from being performed at cube build time on dimensions with a value-based hierarchy? For the one level-based dimension (TIME), I have already unchecked it in the "Summarize To" tab in AW Manager.
    2. Are there any other tips that might help me get this cube built?
    Here is the log from the olapsys.xml_load_log table:
    RECORD_ID LOG_DATE AW XML_MESSAGE
    1. 09-MAR-06 SYS.AWXML 08:18:51 Started Build(Refresh) of APSHELL Analytic Workspace.
    2. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Attached AW APSHELL in RW Mode.
    3. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Started Loading Dimensions.
    4. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members.
    5. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    6. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ACCOUNT.DIMENSION. Added: 0. No Longer Present: 0.
    7. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    8. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for CATEGORY.DIMENSION. Added: 0. No Longer Present: 0.
    9. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for DATASRC.DIMENSION (3 out of 9 Dimensions).
    10. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for DATASRC.DIMENSION. Added: 0. No Longer Present: 0.
    11. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ENTITY.DIMENSION (4 out of 9 Dimensions).
    12. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ENTITY.DIMENSION. Added: 0. No Longer Present: 0.
    13. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    14. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INPT_CURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    15. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INTCO.DIMENSION (6 out of 9 Dimensions).
    16. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INTCO.DIMENSION. Added: 0. No Longer Present: 0.
    17. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RATE.DIMENSION (7 out of 9 Dimensions).
    18. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RATE.DIMENSION. Added: 0. No Longer Present: 0.
    19. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    20. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RPTCURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    21. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for TIME.DIMENSION (9 out of 9 Dimensions).
    22. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Members for TIME.DIMENSION. Added: 0. No Longer Present: 0.
    23. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Dimension Members.
    24. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies.
    25. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    26. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Hierarchies for ACCOUNT.DIMENSION. 1 hierarchy(s) ACCOUNT_HIERARCHY Processed.
    27. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    28. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for CATEGORY.DIMENSION. 1 hierarchy(s) CATEGORY_HIERARCHY Processed.
    29. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for DATASRC.DIMENSION (3 out of 9 Dimensions).
    30. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for DATASRC.DIMENSION. 1 hierarchy(s) DATASRC_HIER Processed.
    31. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for ENTITY.DIMENSION (4 out of 9 Dimensions).
    32. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for ENTITY.DIMENSION. 2 hierarchy(s) ENTITY_HIERARCHY1, ENTITY_HIERARCHY2 Processed.
    34. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INPT_CURRENCY.DIMENSION. No hierarchy(s) Processed.
    36. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INTCO.DIMENSION. 1 hierarchy(s) INTCO_HIERARCHY Processed.
    37. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RATE.DIMENSION (7 out of 9 Dimensions).
    38. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RATE.DIMENSION. No hierarchy(s) Processed.
    39. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    40. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RPTCURRENCY.DIMENSION. No hierarchy(s) Processed.
    41. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for TIME.DIMENSION (9 out of 9 Dimensions).
    42. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for TIME.DIMENSION. 2 hierarchy(s) CALENDAR, FISCAL_CALENDAR Processed.
    43. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies.
    44. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes.
    45. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    46. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ACCOUNT.DIMENSION. 6 attribute(s) ACCTYPE, CALC, FORMAT, LONG_DESCRIPTION, RATETYPE, SCALING Processed.
    47. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    48. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for CATEGORY.DIMENSION. 2 attribute(s) CALC, LONG_DESCRIPTION Processed.
    49. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for DATASRC.DIMENSION (3 out of 9 Dimensions).
    50. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for DATASRC.DIMENSION. 3 attribute(s) CURRENCY, INTCO, LONG_DESCRIPTION Processed.
    51. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ENTITY.DIMENSION (4 out of 9 Dimensions).
    52. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ENTITY.DIMENSION. 3 attribute(s) CALC, CURRENCY, LONG_DESCRIPTION Processed.
    53. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    54. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INPT_CURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    55. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INTCO.DIMENSION (6 out of 9 Dimensions).
    56. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INTCO.DIMENSION. 2 attribute(s) ENTITY, LONG_DESCRIPTION Processed.
    57. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for RATE.DIMENSION (7 out of 9 Dimensions).
    58. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RATE.DIMENSION. 1 attribute(s) LONG_DESCRIPTION Processed.
    59. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    60. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RPTCURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    61. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for TIME.DIMENSION (9 out of 9 Dimensions).
    62. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes for TIME.DIMENSION. 3 attribute(s) END_DATE, LONG_DESCRIPTION, TIME_SPAN Processed.
    63. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes.
    64. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Dimensions.
    65. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Started Updating Partitions.
    66. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Updating Partitions.
    67. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Loading Measures.
    68. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE.
    69. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Finished Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE. Processed 100000001 Records. Rejected 0 Records.
    70. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Started Auto Solve for Measures: SIGNEDDATA from Cube FINANCE.CUBE.

    Hi, I've taken a few minutes to do a quick analysis. I just saw in your post that this isn't "real data", but some type of sample. Here is what I'm seeing. First off, this is the strangest dataset I've ever seen. With the exception of TIME, DATASOURCE, and RPTCURRENCY, every single other dimension is nearly 100% dense. Quite truthfully, in a cube with this many dimensions, I have never seen data be 100% dense like this (usually with this many dimensions it's more around 0.01% dense at most, often even lower). Is it possible that the way you generated the test data caused this to happen?
    If so, I would strongly encourage you to go back to your "real" data, run the same queries, and post the results. I think the "real" data will produce a much different profile than what we're seeing here.
    If you really do want to try and aggregate this dataset, I'd do the following:
    1. Drop any dimension that doesn't add analytic value
    Report currency is an obvious choice for this - if every record has exactly the same value, then it adds no additional information (but increases the size of the data)
    Also, data source falls into the same category. However, I'd add one more question / comment with data source - even if all 3 values DID show up in the data, does knowing the data source provide any analytical capabilities? I.e. would a business person make a different decision based on whether the data is coming from system A vs. system B vs. system C?
    2. Make sure all remaining dimensions except TIME are DENSE, not sparse. I'd probably define the cube with this order:
    Account...........dense
    Entity..............dense
    IntCo...............dense
    Category.........dense
    Time...............sparse
    3. Since time is level based (and sparse), I'd set it to only aggregate at the day and month levels (i.e. let quarter and year be calculated on the fly)
    4. Are there really no "levels" in the dimensions like Entity? Usually companies define those with very rigid hierarchies (assuming this means legal entity)
    Good luck with loading this cube. Please let us know how "real" this data is. I suspect with that many dimensions that the "real" data will be VERY sparse, not dense like this sample is, in which case some of the sparsity handling functionality would make a huge benefit for you. As is, with the data being nearly 100% dense, turning on sparsity for any dimension other than TIME probably kills your performance.
    Let us know what you think!
    Thanks,
    Scott
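    One way to answer the density question above against the "real" data is to profile the relational fact table directly. A sketch, with hypothetical table and column names (substitute your own fact table and dimension keys):

    ```sql
    -- Hypothetical names. Compare the number of distinct keys that actually
    -- occur in the fact table with each dimension's member count; a ratio
    -- near 1 on most dimensions reproduces the dense profile seen here.
    SELECT COUNT(*)                   AS fact_rows,
           COUNT(DISTINCT entity_id)  AS entities_used,
           COUNT(DISTINCT account_id) AS accounts_used,
           COUNT(DISTINCT intco_id)   AS intcos_used,
           COUNT(DISTINCT time_id)    AS days_used
    FROM   finance_fact;
    ```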

  • Access Oracle 10g OLAP Analytic Workspace from BOE XI for creating Universes

    Hi, skilled people!
    Is there any environment to access an Oracle 10g OLAP Analytic Workspace from BOE XI (not from BOE XIR2) for creating Universes?
    Thank you.

    Hello Walter,
    Thanks for the response.
    I'm using the following SAP GUI: 710 Final Release, Version 7100.2.7.1038, Build 971593, Patch-Level 7.
    Trying to install the Desktop part of the integration kit results in the following message: "Unable to perform Desktop installation. The installation is unable to proceed with a desktop installation because Crystal Reports 2008 was not detected. To perform a Desktop installation, it must be installed."
    Without having installed any integration kit on the client side, I can still see the data access driver for SAP Business Warehouse 3.x in the Universe Designer (when trying to define a new connection). After providing the respective parameters, however, a connection is not established successfully. When testing the connection, it just tells me that the connection attempt failed (SBO0001), without providing any further details.
    What do you think the problem is? The SAP GUI? Is it required at all for my purposes (generating OLAP universes in Designer from SAP BI) to install the integration kit on the client side? As far as I have read, it is?
    Thanks, Konrad

  • Recommended Hardware Config for huge OLAP Cube build

    Hi David ,
    Can you please provide the recommended hardware configuration for a cube with billions of rows in the underlying fact table? We ran a cube with 0.1 billion rows and it took around 7 hours to process. What are the key areas for gaining performance, and how much CPU and RAM (on the server, and for the Oracle DB) would give the biggest benefit in such configurations?
    Also, we have a 32-bit Windows 2003 server. Can we get better results if we switch to 64-bit?
    Thanks in advance,
    DxP.

    Hi!
    Well, I would definitely recommend that you proceed with a consultant, because it sounds like you have some gaps in methodology and experience, together with time constraints on your project.
    Regarding hardware, unfortunately I am not able to give you any precise figures because I have no supporting information. What you should bear in mind is that your system must be balanced. You (better, together with a consultant) need to find the right balance between all the factors you consider important while building the hardware:
    - cost
    - architecture
    - speed
    - availability
    - load scenarios
    etc...
    Regarding the architecture point, bear in mind that finding the right balance between processing power and storage throughput is a challenge today. Pay attention to this.
    Regards,
    Kirill

  • Oracle 11g OLAP & SQL

    Hi All
    Our company is in the process of doing a POC warehouse where we are using Oracle OLAP extensively for summary management. I have been tasked with porting all our existing (Cognos) reports from an Informix backend to Oracle. The OLAP team has created some cube views for me, but I'm struggling to get my head around how I'm going to use them for reporting purposes.
    Example
    1) I'm using the following SQL (abbreviated) to get my data:
    select
    v_product.product_description,
    v_product.level_name,
    v_sales.sales,
    v_sales.calc_measure
    from v_product, v_sales, v_location, v_time
    where ... all the joins ...
    and v_product.level_name in ('DEPARTMENT', 'CLASS')
    and v_location.level_name = 'TOTAL'
    and v_time.level_name = 'TOTAL'
    2) This brings back data that looks like:
    product_description level_name sales calc_measure
    MEAT DEPARTMENT 232323 23.56
    POULTRY DEPARTMENT 43444 35.23
    BEEF CLASS 232323 23.56
    CHICKEN CLASS 67455 35.23
    LAMB CLASS 73444 23.56
    PORK CLASS 55555 35.23
    3) I need to create a list report that's grouped by department and, for each department, shows all the classes; but off the data above that is very difficult. I cannot just select all the class values and then do the aggregation in the report, because there is a calculated measure, so I need to select the value for that level from the cube view. Is it possible in one SQL statement, or will I need more?
    Thanks for any ideas

    Dave, thanks for your reply. Please excuse my poor example; this was my first day using cube views, and I cannot log in to my work setup from home, so I'm going by memory alone.
    To answer your question
    1. We are using 11g AW
    2. I don't remember the exact cube view names, but they are not relevant to my question (I think).
    3. Alas, the Oracle forums don't support much formatting, else I would have provided an ASCII example. I have uploaded the sample report output here: http://i279.photobucket.com/albums/kk145/angusgoosemiller/sample.gif
    Better Example
    1. From what I can gather, if you query more than one level of the same dimension from a cube view, you get the results denormalized as rows. So effectively, for my report, I want the department and class levels from the product hierarchy (where class is a child of department) and some relevant measures, one of which is calculated. If I select this from the cube view, I get results in the form:
    DEPARTMENT LEVEL ... row values
    DEPARTMENT LEVEL ... row values
    CLASS LEVEL ... row values
    CLASS LEVEL ... row values
    2) My report is a list report grouped by department; for each department, all the class records are displayed with the measures. There must also be a department total for every department and a grand total for the report. If the calculated measure were not involved, I could just return all the class records (there is a department attribute defined that is also in the cube view) and calculate all the department values dynamically in the report. However, because of the calculated measure, and probably as a best practice from a performance/redundancy perspective anyway, I only want to select from the cube view in its aggregated form, as is currently happening.
    3) From a report design perspective this presents some challenges: relationally, hierarchy levels are normally modeled as columns, and we used to process calculated measures dynamically in the report. Going forward, we would like all the calculations etc. to happen in the OLAP engine.
    4) So basically the way I see it I need the following from the cube:
    4.1) The department records
    4.2) The class records
    4.3) the department total records
    4.4) the grand total record for all departments
    Can I get all of that in one SQL statement, in such a manner that I can produce the report? And how would an Oracle-based reporting solution get the data: via SQL, or directly from the cube via the OLAP API?
    Thanks for your help I really appreciate any advice!
    Cheers
    Angus
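    The report layout described above (class rows grouped under departments, plus department totals and a grand total) can often be fetched in a single pass by widening the level filter. A hedged sketch, reusing the hypothetical view names from the earlier post and assuming the product view exposes a department attribute on class-level rows:

    ```sql
    -- Hypothetical names; a sketch, not a definitive solution.
    -- TOTAL rows give the grand total, DEPARTMENT rows give the subtotals,
    -- CLASS rows give the detail; the report engine only has to group and
    -- order, never to aggregate (so the cube's calc_measure stays correct).
    select v_product.level_name,
           v_product.department_description,   -- assumed parent attribute
           v_product.product_description,
           v_sales.sales,
           v_sales.calc_measure
    from   v_product, v_sales, v_location, v_time
    where  ... all the joins ...
    and    v_product.level_name in ('TOTAL', 'DEPARTMENT', 'CLASS')
    and    v_location.level_name = 'TOTAL'
    and    v_time.level_name = 'TOTAL'
    order by v_product.department_description, v_product.level_name
    ```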

  • ENT-06954: Error while Displaying BIBean for Cube/Dimension Dataviewer.

    Hi all,
    I have defined a cube with three dimensions. All elements are deployed and the mappings is executed successfully. If I try to open the dataviewer either for the cube or for the dimensions I receive the following errors:
    ENT-06954: Error while Displaying BIBean for Cube Dataviewer.
         at oracle.wh.ui.owbcommon.dataviewer.dimensional.CubeDataViewerMain.getDataviewObject(CubeDataViewerMain.java:391)
         at oracle.wh.ui.owbcommon.dataviewer.dimensional.CubeDataViewerEditor._init(CubeDataViewerEditor.java:66)
         at oracle.wh.ui.editor.Editor.init(Editor.java:1115)
         at oracle.wh.ui.editor.Editor.showEditor(Editor.java:1431)
         at oracle.wh.ui.owbcommon.IdeUtils._tryLaunchEditorByClass(IdeUtils.java:1431)
         at oracle.wh.ui.owbcommon.IdeUtils._doLaunchEditor(IdeUtils.java:1344)
         at oracle.wh.ui.owbcommon.IdeUtils._doLaunchEditor(IdeUtils.java:1362)
         at oracle.wh.ui.owbcommon.IdeUtils.showDataViewer(IdeUtils.java:864)
         at oracle.wh.ui.owbcommon.IdeUtils.showDataViewer(IdeUtils.java:851)
         at oracle.wh.ui.console.commands.DataViewerCmd.performAction(DataViewerCmd.java:19)
         at oracle.wh.ui.console.commands.TreeMenuHandler$1.run(TreeMenuHandler.java:188)
         at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:178)
         at java.awt.EventQueue.dispatchEvent(EventQueue.java:454)
         at java.awt.EventDispatchThread.pumpOneEventForHierarchy(EventDispatchThread.java:201)
         at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:151)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:145)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:137)
         at java.awt.EventDispatchThread.run(EventDispatchThread.java:100)
    ENT-06972: Error while Displaying BIBean for Dimension Dataviewer.
         at oracle.wh.ui.owbcommon.dataviewer.dimensional.DimDataViewerMain.getDimensionListObject(DimDataViewerMain.java:479)
         at oracle.wh.ui.owbcommon.dataviewer.dimensional.DimDataViewerEditor._init(DimDataViewerEditor.java:89)
         at oracle.wh.ui.editor.Editor.init(Editor.java:1115)
         at oracle.wh.ui.editor.Editor.showEditor(Editor.java:1431)
         at oracle.wh.ui.owbcommon.IdeUtils._tryLaunchEditorByClass(IdeUtils.java:1431)
         at oracle.wh.ui.owbcommon.IdeUtils._doLaunchEditor(IdeUtils.java:1344)
         at oracle.wh.ui.owbcommon.IdeUtils._doLaunchEditor(IdeUtils.java:1362)
         at oracle.wh.ui.owbcommon.IdeUtils.showDataViewer(IdeUtils.java:864)
         at oracle.wh.ui.owbcommon.IdeUtils.showDataViewer(IdeUtils.java:851)
         at oracle.wh.ui.console.commands.DataViewerCmd.performAction(DataViewerCmd.java:19)
         at oracle.wh.ui.console.commands.TreeMenuHandler$1.run(TreeMenuHandler.java:188)
         at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:178)
         at java.awt.EventQueue.dispatchEvent(EventQueue.java:454)
         at java.awt.EventDispatchThread.pumpOneEventForHierarchy(EventDispatchThread.java:201)
         at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:151)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:145)
         at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:137)
         at java.awt.EventDispatchThread.run(EventDispatchThread.java:100)
    In another warehouse this function works properly. I am using OWB 10gR2 (10.2.0.1.31).
    Any hints to fix this issue would be appreciated.
    Thanks in Advance, C.

    If you run OWB using the supplied shell script or batch file, are there other errors on the console before the exception? OWB uses a component (OLAPI, which does some validation over and above any other) that dumps errors to standard output, and this often reveals invalid metadata in the OLAP catalog.

  • Hybrid OLAP model in Oracle 9i OLAP

    Please, does anybody know whether it is possible to create a hybrid model with Oracle 9i OLAP technology? In Express I used the RAM for mapping dimensions and fact tables to Express multidimensional cubes, so when I ran queries (e.g. OSA queries) the RAM/Express engine knew dynamically what data to retrieve from the fact tables (i.e. detail data) and what data to retrieve from the Express cubes (i.e. aggregated data), in a way hidden from the end user.
    So, in Oracle 9i OLAP, what is in charge of accomplishing the RAM function? And if this functionality exists, can the Analytic Workspace Manager create the metadata for this hybrid model?
    Secondly, OWB 9.0.4 is capable of creating the OLAP metadata from the result of a fact load mapping; do you think future OWB releases will allow creating the hybrid metadata, to be exploited later from BI Beans?
    Thanks a lot in advance,
    Pablo Ibarra.

    Support for a hybrid model will be implemented in a future release.
    Warehouse Builder, as a design and implementation platform, will continue to evolve as OLAP evolves.

  • Oracle 10g OLAP to 11g OLAP upgrade ?

    We are currently planning an upgrade from 10g OLAP to 11g OLAP. We have 12 AWs in 10g OLAP which we need to move over, along with the associated DML programs and additional SQL reporting views.
    Questions:
    1. Is there any documentation available on the necessary steps for a 10g OLAP to 11g OLAP upgrade?
    2. What would happen to existing AWs prepared in 10g: would they be migrated to 11g transparently, or do they have to be re-created?
    3. Is there any specific documentation on changes in the way cube builds are done programmatically?
    4. Are there any changes in the way limit maps work in 11g?
    Please advise.
    Thanks,
    Sudip

    Migrating a 10g cube to 11g depends on whether you are talking about 11gR1 or 11gR2. 10g cubes continue to operate in the "10g way" even after a database upgrade. They will not become "11g cubes" until they are rebuilt after the database upgrade. In 11gR2, there is a supported way to migrate 10g cubes to 11g cubes, both with AWM and with PL/SQL. Sorry to say... this functionality doesn't exist in 11gR1: you'll have to rebuild your cubes from the ground up.
    The SQL relational views built using the AWM plugin in 10g are no longer applicable in 11g. That's because OLAP cubes in 11g are registered in the Oracle data dictionary (just like other Oracle objects), and the SQL relational views are managed in the database as a recognized part of the product. The SQL relational views are quite different in 11g, so you will likely have to rewrite queries against them.
    LIMIT map syntax is the same, but performance is much better.
    See if this blog entry helps:
    http://www.rittmanmead.com/2009/10/09/olap-10gr2-and-dense-looping/
    Edited by: Stewart Bryson on Feb 19, 2010 8:40 AM
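    As a starting point after the rebuild, the 11g data dictionary can be queried to locate the system-managed views. A sketch, assuming the standard 11g OLAP dictionary views (check your release's OLAP reference for the exact names and columns):

    ```sql
    -- Cube-level relational views registered by 11g for the current user
    SELECT cube_name, view_name FROM user_cube_views;

    -- Dimension-level views
    SELECT dimension_name, view_name FROM user_cube_dim_views;
    ```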

  • BI Beans does not use QUERY REWRITE for cube

    Hello!
    First of all I would like to say big thanks to Keith for help on dimension rollup. It works now.
    I am creating a pilot environment with
    Oracle      10.2.0.3
    OWB 10.2.0.3
    SS Add-in 10.1.2.2
    and one cube ST_R and three dims
    DIM_R_NOMA for products
    DIM_R_CIDI for stores
    DIM_R_TIME for time
    Now I have deployed the dimensions and cube to CWM2. The dimensions are working quite well; I would say even better than in MOLAP: about 1 second for a rollup over 2000 members in a level.
    Now I am facing a second problem. I cannot force BI Beans (which is used in the SS Add-in, Discoverer, and the OWB browser) to use the summaries prepared by the DBMS_ODM package.
    1/ I have prepared MV LOGS for dimension and fact tables
    2/ I have prepared MV for dimensions using DBMS_ODM
    3/ I've prepared materialized view for cube aggs. with
    DBMS_ODM.CREATESTDFACTMV('WWHH','ST_R','ST_R.sql','C:\TEMP',true,'FULL');
    CWM2_OLAP_CUBE.set_mv_summary_code('WWHH','ST_R','GROUPINGSET');
    as described in OLAP REFERENCE on DBMS_ODM page.
    I explained all my MVs; they seem to be OK. They support
    REWRITE_GENERAL,
    REWRITE_FULL_TEXT,
    REWRITE_PART_TEXT
    There is no support for PCT rewrite, etc.
    My user 'WWHH' has the privileges:
    ANALYZE ANY
    QUERY REWRITE
    GLOBAL QUERY REWRITE
    My database has the settings:
    QUERY_REWRITE_ENABLED = true
    QUERY_REWRITE_INTEGRITY = stale_tolerated
    All MVs and tables are analyzed.
    I do not use parallel settings on tables or MVs.
    To do some further analysis, I've enabled olap_continuous_trace_file. It generates some useful SQL statements from BI Beans in UDUMP. These statements DO NOT resolve to the materialized views in the explain plan.
    Questions:
    Are there any settings for BI Beans to turn this on/off?
    Are there any other packages to create the MVs?
    How can I find out why the MVs are not being used?
    Thanks everybody for your cooperation.
    Regards,
    Kirill Boyko

    Keith,
    Thank you for response.
    I refreshed the metadata, but no result. Again, the dimensions are OK, but not the cube. I am attaching the explanation of why it did not rewrite. This explanation comes from
    dbms_mview.Explain_Rewrite:
    QSM-01150: query did not rewrite
    QSM-01263: query rewrite not possible when query references a dictionary table or view
    QSM-01284: materialized view MV1125 has an anchor table DIM_R_TIME_V not found in query
    QSM-01102: materialized view, MV1125, requires join back to table, DIM_R_CIDI_V, on column, REGION_ID
    QSM-01219: no suitable materialized view found to rewrite this query
    QSM-01284 is wrong: I do have DIM_R_TIME_V in the MView.
    Here is my MVIEW script from database:
    SELECT
    FROM
    WWHH.CAWHTB2 a,
    WWHH.DIM_R_CIDI_V b,
    WWHH.DIM_R_NOMA_V c,
    WWHH.DIM_R_TIME_V d
    WHERE
    b.STORE_ID = a.CIIDCA AND
    c.ARTICLE_ID = a.NMIDCA AND
    d.DAY_ID = a.CLIDCA
    GROUP BY GROUPING SETS ( ...)
    Could you tell me where to check in metadata that MVIEW is correctly setup?
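    For reference, the diagnosis above can be reproduced for a single statement with DBMS_MVIEW.EXPLAIN_REWRITE, which writes its findings into the REWRITE_TABLE (created by utlxrw.sql). A sketch, with the query text elided:

    ```sql
    -- The query string below is a placeholder for the actual BI Beans SQL
    -- captured from the UDUMP trace.
    BEGIN
      DBMS_MVIEW.EXPLAIN_REWRITE(
        query        => 'SELECT ... FROM WWHH.CAWHTB2 ...',
        mv           => 'WWHH.MV1125',
        statement_id => 'cube_rw_check');
    END;
    /
    SELECT message FROM rewrite_table
    WHERE  statement_id = 'cube_rw_check';
    ```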

  • Oracle Discoverer OLAP and 11g OLAP

    Does anyone know if I can use Disco Plus OLAP against an 11g database cube? Currently, at my patch level, it definitely doesn't work, as I can't even select a catalog to report on; but if I add patches, will it work? And is it certified?
    I have read the following and it still isn't clear:
    Doc ID: 808276.1
    and
    Doc ID: 727948.1
    please help
    Thanks

    Oracle makes it a general policy not to announce things in future releases, even bug fixes, without due approval from senior management. This puts Oracle developers in the unfortunate position of not being able to pass on information in forums like this, which is why you are not able to get a straight answer to your very reasonable question.
    It is true that full support and certification for D4O and the Excel Add-In did not make the 11.1 release. This was for two reasons.
    (1) On the OLAP Option side, there were so many changes to the metadata layer in 11.1 that there simply was not time to fix all the bugs. As I said earlier, D4O basically works against 11.1, but the OLAP group did not feel confident in its stability and performance. I can say that many D4O-related OLAP bugs were logged and targeted for 11.2, and that stability and quality is the number one goal for the 11.2 OLAP release.
    (2) The teams that worked on Discoverer classic, Discoverer for OLAP, and the Excel Add-In were moved on to new projects following the acquisition of Siebel. You can see the statement of direction here.
    Of these clearly the latter is the bigger problem. Oracle OLAP itself is committed to maintaining support for existing interfaces and the OLAP API (used by D4O) has not been deprecated. In fact it was the basis of much of the work in 11g.

  • OLAP error leading to a corrupted AW during cube build

    Hi all
    My cube load keeps crashing, and the result is a corrupted AW. Sometimes it happens, sometimes it doesn't, so it is very hard to pinpoint the cause of the problem.
    I would like to emphasize that the alert log and trace files contain general info about the issue but do not help in resolving the problem.
    Let me explain what is happening.
    I have 6 cubes A, B, C, D, E and F in "MY_AW", and all the cubes are populated via a single dbms_cube.build call, as shown below:
    DBMS_CUBE.BUILD('CUBE_A, CUBE_B, CUBE_C, CUBE_D, CUBE_E, CUBE_F', 'C', FALSE, 5, TRUE, TRUE, FALSE);
    Here is a brief description of the cube structure
    a) CUBE_A, CUBE_B and CUBE_C are partitioned along dimension A_A (i.e. along the same dimension)
    b) CUBE_D is partitioned along dimension B_B (i.e. along a different dimension)
    c) CUBE_E and CUBE_F are NOT partitioned (these cubes are really tiny, so there is no need to partition them)
    All 6 cubes are loaded on a daily basis. The initial full load always completes without issues, and the next few (say 3) incremental loads run successfully too.
    After that, any further incremental load generates an Oracle error (shown below) and the result is a corrupted AW.
    It is really frustrating because the error is inconsistent (i.e. it does not always surface), although the cube structure is constant across all loads.
    Can anybody figure out what is happening here?
    Any help would be highly appreciated.
    ERROR at line 1:
    ORA-37162: OLAP error
    XOQ-01707: Oracle job "JOB$_7041" failed while executing slave build "CUBE_F USING (LOAD NO SYNCH, SOLVE) AS OF SCN 11027959160835" with error "37162: ORA-37162: OLAP error
    XOQ-00703: error executing OLAP DML command "(AW ATTACH MY_AW RO : ORA-01403: no data found )"
    ORA-06512: at "SYS.DBMS_CUBE", line 234
    ORA-06512: at "SYS.DBMS_CUBE", line 316
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_CUBE", line 234
    ORA-06512: at "SYS.DBMS_CUBE", line 287
    ORA-06512: at line 3

    One recent source of AW corruption is
    BUG 8664896 - ORA-20000: ORU-10027: BUFFER OVERFLOW - AW VALIDATE
    A characteristic error of this bug (which you may see in the alert log) is
    ORA-00600: internal error code, arguments: [xspggepGenPSErase05], [], [], []
    This is fixed in the most recent 11g patches, as described by MetaLink note 1078454.1, so applying them may clear up the issue.
    I also see that you are selecting the 'atomic' option in your call to dbms_cube.build. This means you may be a victim of
    BUG 11795047 - USING OLAP "ROLLBACK TO FREEZE" CAN DAMAGE AW
    The issue is that if you run an atomic build and any error occurs during a slave process, your AW may end up corrupted. Choosing FALSE for the atomic argument would not fix the underlying error, but it may prevent the subsequent AW corruption. As of this writing the fix for this bug is not part of any public OLAP patch, so you would need to get a one-off patch through a service request.
    In either case I think a service request would be appropriate here since this is a serious error. You can mention my name if you like and I will help direct the problem to the appropriate developers.
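    As a concrete illustration of that workaround, the poster's build call with the atomic argument switched to FALSE might look like this (parameter names follow the 11g DBMS_CUBE documentation; verify the positional mapping against your release):

    ```sql
    BEGIN
      DBMS_CUBE.BUILD(
        script               => 'CUBE_A, CUBE_B, CUBE_C, CUBE_D, CUBE_E, CUBE_F',
        method               => 'C',
        refresh_after_errors => FALSE,
        parallelism          => 5,
        atomic_refresh       => FALSE,  -- non-atomic: a slave failure no longer
                                        -- rolls back (and risks corrupting) the AW
        automatic_order      => TRUE,
        add_dimensions       => FALSE);
    END;
    /
    ```

    With atomic_refresh => FALSE, a failed cube build leaves the other cubes' results in place rather than attempting the rollback that triggers the corruption described above.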

  • FAST refresh not working for Cube & Dimensions in AWM

    Hi,
    My question is about refreshing a cube/dimension using the FAST refresh method in AWM.
    1. My dimension (MVIEW refresh enabled in AWM) is refreshed without an error when I pass the refresh method parameter as 'F' in the DBMS_CUBE.BUILD() script, although there is no MVIEW log present on the dimension table.
    In ALL_MVIEWS.LAST_REFRESH_TYPE, a 'COMPLETE' refresh is logged.
    2. My CUBE doesn't allow me to select refresh_type=FAST when there is no MVIEW log built.
    The same CUBE (MVIEW refresh enabled, refresh_type=FAST in AWM) throws the following error even when I create MVIEW logs for all fact and dimension tables in the DB:
    java.lang.NullPointerException
    at oracle.olap.awm.dataobject.DatabaseDO.commitOLAPI(Unknown Source)
    at oracle.olap.awm.dataobject.aw.WorkspaceDO.commitOLAPI(Unknown Source)
    at oracle.olap.awm.dataobject.olapi.UModelDO.commitOLAPI(Unknown Source)
    at oracle.olap.awm.dataobject.olapi.UModelDO.update(Unknown Source)
    at oracle.olap.awm.dataobject.olapi.UCubeDO.update(Unknown Source)
    at oracle.olap.awm.dataobject.dialog.PropertyViewer.doApplyAction(Unknown Source)
    at oracle.olap.awm.dataobject.dialog.PropertyViewer$1ApplyThread.run(Unknown Source)     
    If I continue past this error, the CUBE MVIEW vanishes from the DB.
    Please help - how do I do a FAST refresh for the CUBE and dimensions?
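    For reference, FAST refresh of an aggregate MV generally requires materialized view logs created WITH ROWID and INCLUDING NEW VALUES on every source table. A sketch with hypothetical table and column names:

    ```sql
    -- Log on the fact table: list every column the cube MV references
    CREATE MATERIALIZED VIEW LOG ON sales_fact
      WITH ROWID, SEQUENCE (store_id, article_id, day_id, amount)
      INCLUDING NEW VALUES;

    -- Repeat for each dimension table the cube MV joins to
    CREATE MATERIALIZED VIEW LOG ON store_dim
      WITH ROWID, SEQUENCE (store_id, region_id)
      INCLUDING NEW VALUES;
    ```

    Even with the logs in place, cube MVs have additional restrictions, which is why the reply below suggests a different approach.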

    If your objective is to process the cube as quickly as possible, MV refresh of the cube is probably not required. As an alternative, you can do the following:
    - Map the cube to a view and use a filter to control what data is presented to the cube during a refresh.
    - Avoid dimension maintenance (adding new members, dropping members, changing parent-child relationships).
    Let's say you update your cube daily with sales data coming from 10,000 stores. You could add a LAST_UPDATED column to the source fact table, timestamp rows whenever the fact table is updated, and then filter on that column in a view to present only the new or changed records to the cube (or whatever filtering scheme you might like).
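    A sketch of that scheme, with hypothetical table and column names:

    ```sql
    -- Track when each fact row last changed
    ALTER TABLE sales_fact ADD (last_updated TIMESTAMP DEFAULT SYSTIMESTAMP);

    -- Present only new or changed rows to the cube; map the cube to this
    -- view in AWM instead of to SALES_FACT itself
    CREATE OR REPLACE VIEW sales_fact_delta AS
      SELECT store_id, article_id, day_id, amount
      FROM   sales_fact
      WHERE  last_updated > (SELECT last_load_time FROM cube_load_control);

    -- Advance CUBE_LOAD_CONTROL.LAST_LOAD_TIME after each successful build
    ```

    The control table here is an assumption; any bookkeeping that records the high-water mark of the previous load would serve.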
    Dimensions are always entirely processed (compiled) regardless of what data is presented to them, so there isn't any advantage to timestamping the records in the dimension table and filtering on them. What is important to understand is that any change to a hierarchy (adding members, deleting members, changing parentage) will trigger re-aggregation of the cube. If you can batch those changes periodically you can limit how much of the cube is processed during a refresh.
    Continuing with the example of the daily update of the sales cube, we can examine two scenarios. In both cases, the cube is partitioned by month and a fact view filters for only the new or updated fact records (let's say there are new records every day).
    Scenario 1
    New records are added to the sales fact table and new stores are added to the store dimension table each day. The store dimension will be updated with new stores, new records will be loaded from the fact table and all partitions will be processed (loaded and solved/aggregated).
    Scenario 2
    New records are added to the fact table, but new stores are loaded into the store dimension only once a week (e.g., on Saturday). The fact view filters for only new or changed records and for stores that currently exist in the store dimension. From Sunday through Friday, new or changed records are loaded from the fact table and only those partitions in the cube that have new or updated data are solved/aggregated. (If there are no changes to hierarchies and no records are loaded into a partition, that partition is not solved/aggregated.) On Saturday, new stores are added to the store dimension table and the store dimension and the cube are updated. Because the store dimension has changed, all partitions of the cube will be processed.
    With Scenario 1, data for new stores is available each day, but the entire cube might be solved each day (if there are new stores). With Scenario 2, new stores are not available until Saturday, but the processing of the cube is limited to only those partitions where there is new fact data.

  • Where can I download AWM for cube creation

    Hi experts
    Where can I download AWM for cube creation?
    Thanks in advance
    Regards
    Frnds

    See the "Downloads" portlet on this page: http://www.oracle.com/technology/products/bi/olap/index.html

  • Oracle 10g OLAP option

    Hi all
    Do we support creating a universe against an Oracle 10g OLAP database, and how do we report against the OLAP data?
    I notice that there isn't an OLAP universe option for this, so I presume that if we can, we do it via OLAP SQL expressions?
    Thank you for your help and kind regards,
    Deam

    Hi,
    You are right, we are supporting Oracle OLAP 10g using SQL.
    In fact we provide an option in Universe Designer where you can automatically create an Oracle view based on an AW (Analytical Workspace) and then generate the universe:
    - Select the "Metadata Exchange" option from the "File" menu
    - Then select the "Oracle OLAP 9i/10g/11g" option
    You can then customize the universe by adding objects, filters, measures, etc.
    You can also add other tables in the schema in order to provide drill-through capabilities.
    Regards,
    Didier
