Recommended Hardware Config for huge OLAP Cube build

Hi David,
Can you please provide the recommended hardware configuration for a cube with billions of rows in the fact table? A cube built on 0.1 billion rows took around 7 hours to process. What are the key areas to look at for a performance gain, and what CPU and RAM (application server and Oracle DB) would give the biggest benefit in such a configuration?
Also, we are on 32-bit Windows Server 2003. Can we expect better results if we switch to 64-bit?
Thanks in advance,
DxP.

Hi!
Well, I would definitely recommend that you engage a consultant, because it sounds like you are short on methodology and experience and are also under time pressure on this project.
Regarding hardware, unfortunately I cannot give you any precise figures because I have no supporting information. What you should bear in mind is that your system must be balanced. You (ideally together with a consultant) need to find the right balance between all the factors you consider important while specifying the hardware:
- cost
- architecture
- speed
- availability
- load scenarios
etc...
Regarding the architecture point, bear in mind that finding the right balance between processing power and storage throughput is a challenge today. Pay close attention to this.
Regards,
Kirill

Similar Messages

  • What is recommended hardware requirement for central SLD?

    We plan to have a central SLD for the entire landscape (small size: ECC 6.0, EP and BI).
    What is the recommended hardware requirement for this SLD?

    Hello Sanjai,
    Please use the following link to find the complete information about the System Landscape Directory (my recommendation is to read the Planning Guide for SLD).
    System Landscape Directory (SLD) - read the SLD paper to develop your strategy.
    Hope it helps.
    Regards,
    Satish.

  • Recommended hardware specs for Reader in Android

    What are the recommended hardware specs for Reader on Android? I would like a smooth scrolling experience for scanned PDF ebooks with mathematical equations, images and graphs, while keeping the price of my new tablet low. Thank you.

    Getting one of the latest devices would obviously give you the best performance, but any device which supports Android 4 or above should be good enough.

  • Project Server 2013 OLAP cube build error

    When I try to build the OLAP cube I receive the below error:
    [2/26/2014 12:36 PM] Failed to build the OLAP cubes. Error: The attempt to build the OLAP database on server SQL_Server_Name failed, the SQL Server Analysis Services Analysis Management Objects (AMO) client software may not be installed on this server, please
    install/update the client on this server and retry. The underlying exception was: Could not load file or assembly 'Microsoft.AnalysisServices, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot
    find the file specified.
    I did some searching and found that the Analysis Management Objects (AMO) client software was not installed on my servers, so I downloaded and installed the AMO client software and restarted the Project Server Service Application on the 4 servers. I rebooted the
    4 servers and tried to build the cube, but I got the same error. I checked the assemblies on the 4 servers and they contain the Microsoft.AnalysisServices DLL.
    Any help?
    I read that AMO client software tools must be installed during the farm setup, after installing the SharePoint Prerequisites and before installing SharePoint. Is that correct?

    Hello,
    Even with SQL Server 2012, the SQL Server 2008 AMO component is required on the Project Server servers. See the post from Brian for more details:
    http://blogs.msdn.com/b/brismith/archive/2012/11/12/project-server-2013-requirements-to-build-an-olap-cube.aspx
    Paul
    Paul Mather | Twitter |
    http://pwmather.wordpress.com | CPS

  • Oracle OLAP cube build question

    Hello,
    I am trying to build a reasonably large cube (around 100 million rows from the underlying relational fact table). I am using Oracle 10g Release 2. The cube has 7 dimensions, the largest of which is TIME (6 years of data with the lowest level being day). The cube build never finishes.
    Apparently it collapses while doing "Auto Solve". I'm assuming this means calculating the aggregations for the upper levels of the hierarchy (although this is not mentioned in any of the documentation I have).
    I have two questions related to this:
    1. Is there a way to keep these aggregations from being performed at cube build time on dimensions with a value-based hierarchy? I already have the one dimension designated as level-based (the TIME dimension) unchecked in the "Summarize To" tab in AW Manager.
    2. Are there any other tips that might help me get this cube built?
    Here is the log from the olapsys.xml_load_log table:
    RECORD_ID LOG_DATE AW XML_MESSAGE
    1. 09-MAR-06 SYS.AWXML 08:18:51 Started Build(Refresh) of APSHELL Analytic Workspace.
    2. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Attached AW APSHELL in RW Mode.
    3. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Started Loading Dimensions.
    4. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members.
    5. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    6. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ACCOUNT.DIMENSION. Added: 0. No Longer Present: 0.
    7. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    8. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for CATEGORY.DIMENSION. Added: 0. No Longer Present: 0.
    9. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for DATASRC.DIMENSION (3 out of 9 Dimensions).
    10. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for DATASRC.DIMENSION. Added: 0. No Longer Present: 0.
    11. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ENTITY.DIMENSION (4 out of 9 Dimensions).
    12. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ENTITY.DIMENSION. Added: 0. No Longer Present: 0.
    13. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    14. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INPT_CURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    15. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INTCO.DIMENSION (6 out of 9 Dimensions).
    16. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INTCO.DIMENSION. Added: 0. No Longer Present: 0.
    17. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RATE.DIMENSION (7 out of 9 Dimensions).
    18. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RATE.DIMENSION. Added: 0. No Longer Present: 0.
    19. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    20. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RPTCURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    21. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for TIME.DIMENSION (9 out of 9 Dimensions).
    22. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Members for TIME.DIMENSION. Added: 0. No Longer Present: 0.
    23. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Dimension Members.
    24. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies.
    25. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    26. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Hierarchies for ACCOUNT.DIMENSION. 1 hierarchy(s) ACCOUNT_HIERARCHY Processed.
    27. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    28. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for CATEGORY.DIMENSION. 1 hierarchy(s) CATEGORY_HIERARCHY Processed.
    29. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for DATASRC.DIMENSION (3 out of 9 Dimensions).
    30. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for DATASRC.DIMENSION. 1 hierarchy(s) DATASRC_HIER Processed.
    31. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for ENTITY.DIMENSION (4 out of 9 Dimensions).
    32. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for ENTITY.DIMENSION. 2 hierarchy(s) ENTITY_HIERARCHY1, ENTITY_HIERARCHY2 Processed.
    34. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INPT_CURRENCY.DIMENSION. No hierarchy(s) Processed.
    36. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INTCO.DIMENSION. 1 hierarchy(s) INTCO_HIERARCHY Processed.
    37. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RATE.DIMENSION (7 out of 9 Dimensions).
    38. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RATE.DIMENSION. No hierarchy(s) Processed.
    39. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    40. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RPTCURRENCY.DIMENSION. No hierarchy(s) Processed.
    41. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for TIME.DIMENSION (9 out of 9 Dimensions).
    42. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for TIME.DIMENSION. 2 hierarchy(s) CALENDAR, FISCAL_CALENDAR Processed.
    43. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies.
    44. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes.
    45. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    46. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ACCOUNT.DIMENSION. 6 attribute(s) ACCTYPE, CALC, FORMAT, LONG_DESCRIPTION, RATETYPE, SCALING Processed.
    47. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    48. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for CATEGORY.DIMENSION. 2 attribute(s) CALC, LONG_DESCRIPTION Processed.
    49. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for DATASRC.DIMENSION (3 out of 9 Dimensions).
    50. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for DATASRC.DIMENSION. 3 attribute(s) CURRENCY, INTCO, LONG_DESCRIPTION Processed.
    51. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ENTITY.DIMENSION (4 out of 9 Dimensions).
    52. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ENTITY.DIMENSION. 3 attribute(s) CALC, CURRENCY, LONG_DESCRIPTION Processed.
    53. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    54. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INPT_CURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    55. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INTCO.DIMENSION (6 out of 9 Dimensions).
    56. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INTCO.DIMENSION. 2 attribute(s) ENTITY, LONG_DESCRIPTION Processed.
    57. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for RATE.DIMENSION (7 out of 9 Dimensions).
    58. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RATE.DIMENSION. 1 attribute(s) LONG_DESCRIPTION Processed.
    59. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    60. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RPTCURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    61. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for TIME.DIMENSION (9 out of 9 Dimensions).
    62. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes for TIME.DIMENSION. 3 attribute(s) END_DATE, LONG_DESCRIPTION, TIME_SPAN Processed.
    63. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes.
    64. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Dimensions.
    65. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Started Updating Partitions.
    66. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Updating Partitions.
    67. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Loading Measures.
    68. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE.
    69. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Finished Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE. Processed 100000001 Records. Rejected 0 Records.
    70. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Started Auto Solve for Measures: SIGNEDDATA from Cube FINANCE.CUBE.
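    (For reference, these rows can also be queried while a build is still running, which is handy for a load that takes hours. A minimal sketch against the OLAPSYS log table shown above; the AW filter value is taken from the log and can be dropped to see all builds:)
    SELECT record_id, log_date, aw, xml_message
      FROM olapsys.xml_load_log
     WHERE aw = 'SPADMIN.APSHELL'
     ORDER BY record_id;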

    Hi, I've taken a few minutes to do a quick analysis. I just saw in your post that this isn't "real data", but some type of sample. Here is what I'm seeing. First off, this is the strangest dataset I've ever seen. With the exception of TIME, DATASOURCE, and RPTCURRENCY, every single other dimension is nearly 100% dense. Quite truthfully, in a cube with this many dimensions, I have never seen data be 100% dense like this (usually with this many dimensions it's more like .01% dense at most, often even lower than that). Is it possible that the way you generated the test data caused this to happen?
    If so, I would strongly encourage you to go back to your "real" data and run the same queries and post results. I think that "real" data will produce a much different profile than what we're seeing here.
    If you really do want to try and aggregate this dataset, I'd do the following:
    1. Drop any dimension that doesn't add analytic value
    Report currency is an obvious choice for this - if every record has exactly the same value, then it adds no additional information (but increases the size of the data)
    Also, data source falls into the same category. However, I'd add one more question / comment with data source - even if all 3 values DID show up in the data, does knowing the data source provide any analytical capabilities? I.e. would a business person make a different decision based on whether the data is coming from system A vs. system B vs. system C?
    2. Make sure all remaining dimensions except TIME are DENSE, not sparse. I'd probably define the cube with this order:
    Account...........dense
    Entity..............dense
    IntCo...............dense
    Category.........dense
    Time...............sparse
    3. Since time is level based (and sparse), I'd set it to only aggregate at the day and month levels (i.e. let quarter and year be calculated on the fly)
    4. Are there really no "levels" in the dimensions like Entity? Usually companies define those with very rigid hierarchies (assuming this means legal entity)
    Good luck with loading this cube. Please let us know how "real" this data is. I suspect that with that many dimensions the "real" data will be VERY sparse, not dense like this sample is, in which case the sparsity handling functionality would provide a huge benefit for you. As is, with the data being nearly 100% dense, turning on sparsity for any dimension other than TIME probably kills your performance.
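    One quick way to sanity-check density before choosing the dense/sparse ordering is to compare distinct member counts against the fact row count in the relational source. This is only a sketch; the fact table and column names below are placeholders, not taken from the original post:
    -- Estimate per-dimension cardinality vs. fact rows (placeholder names: finance_fact, *_id columns).
    SELECT COUNT(*)                    AS fact_rows,
           COUNT(DISTINCT account_id)  AS accounts,
           COUNT(DISTINCT entity_id)   AS entities,
           COUNT(DISTINCT intco_id)    AS intcos,
           COUNT(DISTINCT category_id) AS categories,
           COUNT(DISTINCT time_id)     AS days
      FROM finance_fact;
    -- If fact_rows approaches the product of the member counts, the data is dense;
    -- if it is a tiny fraction of that product, it is sparse and should be ordered accordingly.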
    Let us know what you think!
    Thanks,
    Scott

  • Minimum/Recommended Hardware Requirements for DHCP on Windows 2012 R2

    Hello,
    I am in the process of building two VMs on Windows Server 2012 R2 in order to run DHCP with failover.
    Before I build my VM servers, I would like to know whether there are recommended hardware specs that these servers should be built with.
    I've been looking around for a little while and can't seem to find anywhere that these requirements are posted.
    Any help would be greatly appreciated!
    Thanks,
    André

    Totally agree with Henrik that DHCP is a very lightweight application and does not require much hardware. In fact, it is often deployed as another role on a Domain Controller. The thinking there is that the DC is something you want to have available all the time, just like DHCP and DNS, so why not put them on the same box?
    My only difference in the hardware suggestion would be to use the 2012 recommendations instead of the 2008 recommendations, though they are pretty much the same -
    http://technet.microsoft.com/en-us/library/dn303418.aspx
    . : | : . : | : . tim

  • Recommended Hardware, OS for Web Cache

    We are planning to implement Web Cache for our 9iAS (Sun Solaris) environment. The Web Cache server will be running on its own box. We would also like to load-balance our Web Cache tier down the road.
    Keeping all this in mind, could someone provide the recommended hardware (with configuration) and OS for implementing Web Cache? Thanks,

    Web Cache is designed to run on low-cost commodity hardware. You need at most 2 CPUs (the faster the better) to run a Web Cache instance. A single-CPU machine is OK, too -- again, the faster the better. For the money, you can't beat a 2x933 Linux box.
    Then there's memory. Most 32-bit OSs provide 2GB of addressable memory. You want to make sure there's enough memory to cache all the content you want to cache. Usually, 2GB is more than enough; 500MB - 1GB is often sufficient. It depends on your app.
    In a few days, I will post a paper on http://otn.oracle.com/products/ias/web_cache/ that covers the best practices for sizing and performance of Web Cache 2.0.0.x (9iAS 1.0.2.2). This will help you with sizing. The network may also be a consideration for performance, as that paper discusses.
    Readers should note that the information in this standalone paper has been moved to the Performance and Tuning Guide for 9iAS release 2 (version 9.0.2), which is due out shortly. Some of the information will differ between Web Cache 9.0.2 and Web Cache 2.0.0.x.

  • Hardware Config for MI server!

    Hi All,
    I need information regarding the hardware configuration for an MI server that will run SAP NetWeaver Mobile 7.1 and support xMAU 3.0 SP5.
    Can anyone throw some light on this?
    Thanks in advance!
    Kanwar

    Hi Kanwar,
    Check this link in SMP for MAU specific information.
    https://websmp105.sap-ag.de/~form/sapnet?_SCENARIO=01100035870000000202&_SHORTKEY=01100035870000694050
    For general MI queries, please check the MI FAQ section https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cc489997-0901-0010-b3a3-c27a7208e20a
    Regards
    Ajith

  • INSTALL_OLAPTRAIN FOR CREATING OLAP CUBES

    Hi all;
    I am working on a project and have finished the Oracle data mining part. Now I want to build the OLAP cube application. First, I need to install the Oracle OLAP 11g sample schema, but I cannot get the OLAPTRAIN sample schema to install.
    What is the quickest way to create OLAP cubes?
    http://www.oracle.com/technetwork/database/enterprise-edition/readme-098894.html

    I am trying to install the olaptrain schema in an 11g 11.2.0.1.0 DB. I should be able to do it, because the readme file requires DB "11g (patch 11.1.0.7 or higher)". However, I get the following error message:
    Begin installation
    ... creating tablespaces
    ... creating OLAPTRAIN user
    ... creating OLAPTRAIN_INSTALL directory
    ... importing dump file into OLAPTRAIN schema.
    Import: Release 11.2.0.1.0 - Production on Mon Aug 9 16:35:44 2010
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, Oracle Label Security, OLAP, Data Mining,
    Oracle Database Vault and Real Application Testing options
    ORA-39002: invalid operation
    ORA-39070: Unable to open the log file.
    ORA-29283: invalid file operation
    ORA-06512: at "SYS.UTL_FILE", line 536
    ORA-29283: invalid file operation
    ... dropping OLAPTRAIN_INSTALL directory
    Installation complete.
    SQL>
    I checked the dm and dm_archive tablespaces, and they are both empty.
    Any help would be appreciated.
    Regards
    Juan Algaba Colera
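    For what it's worth, ORA-29283 from SYS.UTL_FILE at this point in the install usually means the OLAPTRAIN_INSTALL directory object points at an OS path that does not exist or is not writable by the database's OS user, so Data Pump cannot open its log file there. A hedged sketch of the usual check and fix; the path below is an assumption, use the folder that actually holds the dump file:
    -- Check where the directory object points (it is created and dropped by the install script).
    SELECT directory_name, directory_path
      FROM dba_directories
     WHERE directory_name = 'OLAPTRAIN_INSTALL';

    -- Recreate it against a path the oracle OS user can read and write (placeholder path).
    CREATE OR REPLACE DIRECTORY olaptrain_install AS '/u01/app/oracle/olaptrain';
    GRANT READ, WRITE ON DIRECTORY olaptrain_install TO olaptrain;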

  • Recommended Hardware configuration for VS

    I was looking at how to improve the performance of my development PC and checked out the VS Virtual Machines on Azure. I noticed two main points,
    1) they all seem to have a second drive that appears to be just for the swapfile/temporary files
    2) they seem to use Windows Server 2012 as the OS
    I also saw some recommendations for using SSD's for the main OS drive.
    What are the recommended/optimal configurations? Are the advantages/disadvantages documented anywhere?
    I can sort of see why (1) would help, as my PC seems to be disk-bound, but I'm not sure whether (2) is significant or just there to allow for SQL Server development or something.
    Also, if I were to use an SSD, is there an easy way to move all the appdata, temporary and other frequently written areas to a second conventional HDD?

    Hello Terry,
    An SSD will increase performance when reading/writing files, so you will benefit from it for IO-heavy operations. However, Visual Studio performance does not rely on this alone; the build process of Visual Studio also depends on the rest of your hardware. It is hard to say how much performance will improve, since it depends on so many things.
    I would recommend this: if you don't need to read or write a project too often, you can keep it on the HDD. If you always use one project and want to improve its performance, put that project on the SSD.
    As for the appdata and temporary files, I cannot see them in the VS settings, so you may not be able to move them. But if you choose to install VS on the SSD, I believe performance will be better when reading/writing files.
    Best regards,
    Barry

  • Olap cube build Error ORA-01403: no data found during DML "SYS.AWXML!R11_AGGREGATE_CONSISTENT"

    Can anyone help decode this error and explain how to resolve it?
    <ERROR>  <![CDATA[37162: ORA-37162: OLAP errorORA-01403: no data foundXOQ-01600: OLAP DML error while executing DML "SYS.AWXML!R11_AGGREGATE_CONSISTENT"ORA-06512: at "SYS.DBMS_CUBE", line 234ORA-06512: at "SYS.DBMS_CUBE", line 316ORA-06512: at line 1]]>></ERROR>
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, Automatic Storage Management, OLAP, Data Mining
    and Real Application Testing options
    Following is the cube_build_log rows when the error occurs
    BUILD_ID  STATUS  COMMAND  BUILD_OBJECT  BUILD_OBJECT_TYPE  OUTPUT_TXT
    9921  FAILED  BUILD    BUILD  <ERROR>  <![CDATA[XOQ-01707: Oracle job "A$9921_JOB$_501362" failed while executing slave build "ORSFHR_PRDBI.RESV_PACE_BI USING (LOAD PRUNE, SOLVE PARALLEL) AS OF SCN 10311831493648" with error "37162: ORA-37162: OLAP errorORA-01403: no data foundXOQ-01600: OLAP DML error while executing DML "SYS.AWXML!R11_AGGREGATE_CONSISTENT"ORA-06512: at "SYS.DBMS_CUBE", line 234ORA-06512: at "SYS.DBMS_CUBE", line 316ORA-06512: at line 1".]]>></ERROR>
    9921  COMPLETED  DETACH AW    BUILD 
    9921  STARTED  DETACH AW    BUILD 
    9921  COMPLETED  ROLLBACK    BUILD 
    9921  STARTED  ROLLBACK    BUILD 
    9921  COMPLETED  REATTACH AW RW WAIT    BUILD 
    9921  STARTED  REATTACH AW RW WAIT    BUILD 
    9921  FAILED  SLAVE  RESV_PACE_BI  CUBE  <ERROR>  <![CDATA[37162: ORA-37162: OLAP errorORA-01403: no data foundXOQ-01600: OLAP DML error while executing DML "SYS.AWXML!R11_AGGREGATE_CONSISTENT"ORA-06512: at "SYS.DBMS_CUBE", line 234ORA-06512: at "SYS.DBMS_CUBE", line 316ORA-06512: at line 1]]>></ERROR>
    9921  COMPLETED  DETACH AW  RESV_PACE_BI  CUBE 
    9921  FAILED  BUILD    BUILD  <ERROR>  <![CDATA[ORA-01403: no data foundXOQ-01600: OLAP DML error while executing DML "SYS.AWXML!R11_AGGREGATE_CONSISTENT"]]>></ERROR>
    9921  STARTED  DETACH AW  RESV_PACE_BI  CUBE 
    9921  FAILED  SOLVE PARALLEL  RESV_PACE_BI  CUBE  <SolveStatistics  IS_EMPTY="no"  CHANGED_VALUES="yes"  CHANGED_RELATIONS="no"/><ERROR>  <![CDATA[ORA-01403: no data foundXOQ-01600: OLAP DML error while executing DML "SYS.AWXML!R11_AGGREGATE_CONSISTENT"]]>></ERROR>
    9921  STARTED  SOLVE PARALLEL  RESV_PACE_BI  CUBE 
    9921  COMPLETED  LOAD  RESV_PACE_BI  CUBE  <CubeLoad  LOADED="4114188"  REJECTED="0"/>

    Please open a service request with Oracle Support regarding this issue.
    thanks,
    Ken
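    Before opening the SR, one way to confirm that the failure is isolated to the aggregation phase is to rerun the build with the load step only and then solve separately. A hedged sketch; the cube name and build options are taken from the log above (schema-qualify the name if you run it from another user), everything else is generic:
    -- Load only, no solve: if this completes, the ORA-01403 is coming from the aggregation step.
    BEGIN
      DBMS_CUBE.BUILD('RESV_PACE_BI USING (LOAD PRUNE)');
    END;
    /
    -- Then attempt the solve on its own (serial first, to rule out the parallel slave path).
    BEGIN
      DBMS_CUBE.BUILD('RESV_PACE_BI USING (SOLVE)');
    END;
    /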

  • Project Server 2013 - OLAP Cube Build Issues

    Hi,
    I have a Project Server 2013 environment with SQL Server 2012 SP1. My cube built successfully, and the Report Authors group (including me) could access the templates and data connections from the BI Center without any problem.
    I re-built the cube and suddenly none of us can access the templates or data connections; it says 'Access denied'!
    I went to the server hosting Analysis Services and checked the ProjectServerViewOlapDataRole role, and none of the database permission check boxes were ticked, as they were when I initially built the cube and added the Report Authors group and the account used by the Secure Store app. Membership still contains both the account and the group.
    Has anyone come across this issue?
    Thank you,
    SJ

    SJ,
    Access to the BI Center, templates and data connections is handled by SharePoint permissions and is usually inherited from PWA.
    The ProjectServerViewOlapDataRole simply provides the ability to view the data from the OLAP database. It sounds like the issue isn't with OLAP but rather with PWA itself. An OLAP error would only have prevented you from retrieving data.
    Did someone modify the BI Center site permissions recently such as breaking inheritance?
    Treb Gatte, Project MVP |
    @tgatte | http://AboutMSProject.com

  • Recommended Hardware Configurations for EP5.0 SP6

    Hi all,
    What are the recommended RAM and hard disk sizes for EP 5.0 SP6 for DEV, QA and PROD servers?
    Help can be appreciated.
    Thanks and Regards,
    Sudhir

    Hi Sudhir,
    It is impossible to say without knowing what you are going to be doing with the portal. SAP doesn't recommend specific sizes. I believe you will find the minimums in the installation guide or the technical infrastructure guide.
    Personally, I have a NW portal running on a Windows laptop with two GB of memory and a 40 GB drive. Depending on the number of users, you could possibly run a portal on a Windows box with two GB of RAM. For an IT organization, a dual processor with 4 GB of RAM and a 40 GB drive should work for a development or sandbox system, unless you expect to do a lot of development on it.
    As far as sizing QA and production systems, that should be left to you working with your hardware provider. They will want to know things like how many users will be on the system, the number of concurrent users, and their locations.
    Hopefully this gives you a little something to start with.
    John

  • What's a decent hardware config for photo/video editing ?

    I apologize if this has been asked a million times before, but I'm a newcomer. I'm seriously thinking of making the switch from the dark side and buying my first Mac - the iMac 20". My main priorities will be photo editing of both JPEG and RAW images from my Nikon D50 and (at last) editing all those video tapes I made of the kids and family vacations over the last 15 years. I'm not a professional photographer or videographer, but I don't want my next machine to be underpowered and cause me to get frustrated waiting for it to do the work and therefore lose interest. I'm kind of a hardware geek, so my philosophy is slanted towards more is better - more disk space, more RAM, yada yada yada. From what I've read on forums and at the dpreview website, a lot of people recommend more system RAM, so I'm confident 2 GB of system RAM is a good place to start. However, I haven't seen much talk about 128 vs 256 MB of RAM on the video cards. That's a no-brainer for gaming - my son has a PC with 256 MB on a GeForce 7600 and he's happy with the performance, at least until the next generation of games comes out. However, I don't want to confuse the needs of gaming with those of video/photo editing.
    Is there a benefit to upgrading the graphics card to 256 MB? In the near future, I would like to buy additional software, like Aperture, Nikon's Capture NX, or Final Cut.
    I realize that upgrading the video memory on the iMac is not easy for the user, so any hardware items I would like to upgrade should be ordered from Apple through the Apple Store at purchase time.
    Thanks much, and looking forward to a new Mac soon.
    Bill

    If you plan to use Final Cut, I would strongly suggest the graphics card upgrade to 256 MB. 2 GB of RAM will be good. This varies from person to person, but the processor upgrade could be useful; it does cost a lot, though.

  • Care to share your server/hardware config for Spatial?

    Hi there - we're in the early stages of planning for a new spatial system (geocoding a large volume of existing address data), and we'd like to get a sense of what different organisations might be using by way of server architecture for a high-availability, large volume dataset GIS where your typical spatial query is a within-polygon or buffer-type search of geocoded address data, and you might need to support several thousand of these per hour... (for example).
    Would you share your platform/CPU/OS/memory/storage config comments as they relate to a dedicated Oracle Spatial 10g DBMS?
    Or perhaps just suggest what you had to do in addition to the baseline 10gDBMS requirements to give Spatial room to perform well...
    Any comments are accepted with humble thanks.
    Kind regards
    Dennis

    Dennis,
    Several thousand, say 3000 per hour, is less than one a second. I'm not sure how much data you have (define "large"), but in general I'd try to keep all, or most, of the data in RAM to get the best performance possible. A Linux box with, say, 512GB of RAM is entirely reasonable to use. If your search pattern is geographically time-sensitive (moves based on time of day), you could also use partition pruning effectively to reduce the amount of data to be searched.
    This will give you a single, fast DB. For HA you need at least two, depending on how you define HA.
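    As a concrete illustration of the partition-pruning idea (a sketch only; the table, columns, partitioning key and SRID are assumptions, not your actual schema):
    -- Range-partition the geocoded address data by a date key so that time-restricted
    -- searches only touch the relevant partitions.
    CREATE TABLE geocoded_addresses (
      address_id  NUMBER PRIMARY KEY,
      load_date   DATE NOT NULL,
      geom        SDO_GEOMETRY
    )
    PARTITION BY RANGE (load_date) (
      PARTITION p2009 VALUES LESS THAN (DATE '2010-01-01'),
      PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01'),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );

    -- Register the geometry metadata and build a LOCAL partitioned spatial index.
    INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
    VALUES ('GEOCODED_ADDRESSES', 'GEOM',
            SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, 0.05),
                          SDO_DIM_ELEMENT('Y', -90, 90, 0.05)),
            4326);

    CREATE INDEX geocoded_addresses_sidx ON geocoded_addresses (geom)
      INDEXTYPE IS MDSYS.SPATIAL_INDEX LOCAL;

    -- A buffer-type search restricted by date: the load_date predicate lets the optimizer
    -- prune partitions before the spatial filter runs.
    SELECT address_id
      FROM geocoded_addresses
     WHERE load_date >= DATE '2010-01-01'
       AND SDO_WITHIN_DISTANCE(geom, :search_point, 'distance=500 unit=meter') = 'TRUE';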
    Bryan
