Adding commentary in an ASO cube and aggregating it to the top level

Hi Gurus,
I have a peculiar problem. We are adding commentary in a BSO Planning cube, and I have a couple of problems related to it.
a) The commentary, which is entered at the lower levels, needs to be pushed to the ASO (reporting) cube.
b) At the top level, the commentary needs to be aggregated, or rather concatenated.
E.g. ProfitCentre has two children, P1 and P2, and the user enters commentary in BSO for P1 and P2 as "Market Risk Deviation" and "Standard Output".
Then, in the HSPgetval Smart View report, the content will look like:
Profit Centre          Market Risk Deviation + Standard Output
P1                     Market Risk Deviation
P2                     Standard Output
Any thoughts/suggestions/input on ways to achieve this?
Thanks
Anubhav

Apart from what Glenn suggested:
This is not available out of the box; you are looking at a Java API + SQL based solution here.
Here are my thoughts:
Use a SELECT query to get the text values and IDs from the HSP_CELL_TEXT (or HSP_TEXT_CELL) table.
Create a Java API program which can import a text list into the ASO cube; the ID is going to be what you get from the table.
Load the data to ASO from Planning.
Now for the aggregation/concatenation part, you'll have to add the concatenated entries again as a Smart List. This can be done by looking at the HSP_CELL_TEXT (or HSP_TEXT_CELL) table; there is an ID associated with each text, so get the associated IDs.
So, for example, if Market Risk Deviation is 1 and Standard Output is 2, then you should add "Market Risk Deviation + Standard Output" as 3; however, you'll have to make sure that there is no entry from Planning for 3. A sketch of the SQL side is below.
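A minimal SQL sketch of that idea. The column names TEXT_ID and TEXT_VALUE are assumptions, as is the availability of LISTAGG (Oracle 11gR2+); verify the actual HSP_CELL_TEXT / HSP_TEXT_CELL layout against your Planning version's repository schema:

    -- Pull the existing cell-text entries and their IDs from the Planning repository
    -- (column names are assumed; check the table layout in your schema)
    SELECT TEXT_ID, TEXT_VALUE
    FROM   HSP_CELL_TEXT;

    -- Build the concatenated parent-level text from the children's commentary,
    -- e.g. 'Market Risk Deviation + Standard Output' for IDs 1 and 2
    SELECT LISTAGG(TEXT_VALUE, ' + ') WITHIN GROUP (ORDER BY TEXT_ID) AS PARENT_TEXT
    FROM   HSP_CELL_TEXT
    WHERE  TEXT_ID IN (1, 2);

    -- Pick a new ID for the concatenated entry that Planning is not already using
    SELECT MAX(TEXT_ID) + 1 AS NEW_TEXT_ID
    FROM   HSP_CELL_TEXT;

The Java API piece would then register these (ID, text) pairs as the text list in the ASO cube and load the new parent-level ID against the upper-level members.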
It is complicated
Regards
Celvin Kattookaran

Similar Messages

  • ASO cubes and dimensional security, how???

    Hi,
    I'm getting my feet wet with ASO, and I'm wondering how to implement security on dimensions in a way similar to a Planning application (which is BSO only, I understand). When administering a planning application, I can apply security to specific members in dimensions, then refresh security etc, but what can I do for an ASO cube? I'm assuming that ASO cubes can be created/administered only through EAS.
    I need to be able to control what groups/users can access certain members of dimensions of ASO cube, when they run reports on the cube.
    Thanks
    Mike

    Hi Mike,
    User management and security are the same for ASO and BSO applications from the Essbase point of view.
    You can use either native security mode or security through Shared Services, depending on the configuration.
    Check out this link to the DBAG for security:
    [http://download.oracle.com/docs/cd/E12825_01/epm.111/esb_dbag/pt06.htm]

  • OBIEE BI Answers: Wrong Aggregation Measures on top level of hierarchy

    Hi to all,
    I have the following problem. I hope my English is clear, because it's a bit complicated to explain.
    I have the following fact table:
    Drug Id   Ordered Quantity
    1         9
    2         4
    1         3
    2         2
    and the following drug table:
    Drug Brand Id   Brand Description   Drug Active Ingredient Id   Drug Active Ingredient Description
    1               Aulin               1                           Nimesulide
    2               Asprina             2                           Acetilsalicilico
    In AWM I've defined a Drug dimension based on the following hierarchy: Drug Active Ingredient (parent) - Drug Brand Description (leaf), mapped as:
    Drug Active Ingredient = Drug Active Ingredient Id of my Drug Table (LONG DESCRIPTION Attribute=Drug Active Ingredient Description)
    Drug Brand Description = Drug Brand Id of my Drug Table (LONG DESCRIPTION Attribute = Drug Brand Description)
    Indeed, in my cube I've mapped the leaf level Drug Brand Description = Drug Id of my fact table. In AWM the Drug dimension is mapped with the Sum aggregation operator.
    If I select in Answers Drug Active Ingredient (the parent of my hierarchy) and Ordered Quantity, I see the following result:
    Drug Active Ingredient Description   Ordered Quantity
    Acetilsalicilico                     24
    Nimesulide                           12
    instead of the correct values:
    Drug Active Ingredient Description   Ordered Quantity
    Acetilsalicilico                     12
    Nimesulide                           6
    Exactly double! But if I drill down on Drug Active Ingredient Description Acetilsalicilico, I see correctly:
    Drug Active Ingredient Description Drug Brand Description Ordered Quantity
    Acetilsalicilico
    - Aspirina 12
    Total 12
    The wrong aggregation is only at the top level of the hierarchy; aggregation at the lower levels is correct. Maybe Answers also sums the Total row? Why?
    I'm frustrated. I beg for your help, please!
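    A quick relational sanity check (hypothetical table and column names, following the sample data above) shows what the rollup to the active-ingredient level should return:

        -- Expected parent-level totals, computed directly against the source tables
        SELECT d.Drug_Active_Ingredient_Description,
               SUM(f.Ordered_Quantity) AS Ordered_Quantity
        FROM   fact_table f
        JOIN   drug_table d
               ON d.Drug_Brand_Id = f.Drug_Id
        GROUP  BY d.Drug_Active_Ingredient_Description;

    If the pivot shows exactly double these values at the top level only, the measure is likely being summed a second time over rows the cube has already aggregated.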
    Giancarlo

    Hi,
    In NQSConfig.ini I can't find the Cache section, so I am posting the whole file. Tell me what I must change. I know your patience is almost at its limit! But I'm a new user of OBIEE.
    # NQSConfig.INI
    # Copyright (c) 1997-2006 Oracle Corporation, All rights reserved
    # INI file parser rules are:
    # If values are in literals, digits or _, they can be
    # given as such. If values contain characters other than
    # literals, digits or _, values must be given in quotes.
    # Repository Section
    # Repositories are defined as logical repository name - file name
    # pairs. ODBC drivers use logical repository name defined in this
    # section.
    # All repositories must reside in OracleBI\server\Repository
    # directory, where OracleBI is the directory in which the Oracle BI
    # Server software is installed.
    [ REPOSITORY ]
    #Star     =     samplesales.rpd, DEFAULT;
    Star = Step3.rpd, DEFAULT;
    # Query Result Cache Section
    [ CACHE ]
    ENABLE     =     YES;
    // A comma separated list of <directory maxSize> pair(s)
    // e.g. DATA_STORAGE_PATHS = "d:\OracleBIData\nQSCache" 500 MB;
    DATA_STORAGE_PATHS     =     "C:\OracleBIData\cache" 500 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000; // 0 is unlimited size
    MAX_CACHE_ENTRY_SIZE = 1 MB;
    MAX_CACHE_ENTRIES = 1000;
    POPULATE_AGGREGATE_ROLLUP_HITS = NO;
    USE_ADVANCED_HIT_DETECTION = NO;
    MAX_SUBEXPR_SEARCH_DEPTH = 7;
    // Cluster-aware cache
    // GLOBAL_CACHE_STORAGE_PATH = "<directory name>" SIZE;
    // MAX_GLOBAL_CACHE_ENTRIES = 1000;
    // CACHE_POLL_SECONDS = 300;
    // CLUSTER_AWARE_CACHE_LOGGING = NO;
    # General Section
    # Contains general server default parameters, including localization
    # and internationalization, temporary space and memory allocation,
    # and other default parameters used to determine how data is returned
    # from the server to a client.
    [ GENERAL ]
    // Localization/Internationalization parameters.
    LOCALE     =     "Italian";
    SORT_ORDER_LOCALE     =     "Italian";
    SORT_TYPE = "binary";
    // Case sensitivity should be set to match the remote
    // target database.
    CASE_SENSITIVE_CHARACTER_COMPARISON = OFF ;
    // SQLServer65 sorts nulls first, whereas Oracle sorts
    // nulls last. This ini file property should conform to
    // that of the remote target database, if there is a
    // single remote database. Otherwise, choose the order
    // that matches the predominant database (i.e. on the
    // basis of data volume, frequency of access, sort
    // performance, network bandwidth).
    NULL_VALUES_SORT_FIRST = OFF;
    DATE_TIME_DISPLAY_FORMAT = "yyyy/mm/dd hh:mi:ss" ;
    DATE_DISPLAY_FORMAT = "yyyy/mm/dd" ;
    TIME_DISPLAY_FORMAT = "hh:mi:ss" ;
    // Temporary space, memory, and resource allocation
    // parameters.
    // You may use KB, MB for memory size.
    WORK_DIRECTORY_PATHS     =     "C:\OracleBIData\tmp";
    SORT_MEMORY_SIZE = 4 MB ;
    SORT_BUFFER_INCREMENT_SIZE = 256 KB ;
    VIRTUAL_TABLE_PAGE_SIZE = 128 KB ;
    // Analytics Server will return all month and day names as three
    // letter abbreviations (e.g., "Jan", "Feb", "Sat", "Sun").
    // To use complete names, set the following values to YES.
    USE_LONG_MONTH_NAMES = NO;
    USE_LONG_DAY_NAMES = NO;
    UPPERCASE_USERNAME_FOR_INITBLOCK = NO ; // default is no
    // Aggregate Persistence defaults
    // The prefix must be between 1 and 8 characters long
    // and should not have any special characters ('_' is allowed).
    AGGREGATE_PREFIX = "SA_" ;
    # Security Section
    # Legal value for DEFAULT_PRIVILEGES are:
    # NONE READ
    [ SECURITY ]
    DEFAULT_PRIVILEGES = READ;
    PROJECT_INACCESSIBLE_COLUMN_AS_NULL     =     NO;
    MINIMUM_PASSWORD_LENGTH     =     0;
    #IGNORE_LDAP_PWD_EXPIRY_WARNING = NO; // default is no.
    #SSL=NO;
    #SSL_CERTIFICATE_FILE="servercert.pem";
    #SSL_PRIVATE_KEY_FILE="serverkey.pem";
    #SSL_PK_PASSPHRASE_FILE="serverpwd.txt";
    #SSL_PK_PASSPHRASE_PROGRAM="sitepwd.exe";
    #SSL_VERIFY_PEER=NO;
    #SSL_CA_CERTIFICATE_DIR="CACertDIR";
    #SSL_CA_CERTIFICATE_FILE="CACertFile";
    #SSL_TRUSTED_PEER_DNS="";
    #SSL_CERT_VERIFICATION_DEPTH=9;
    #SSL_CIPHER_LIST="";
    # There are 3 types of authentication. The default is NQS
    # You can select only one of them
    #----- 1 -----
    #AUTHENTICATION_TYPE = NQS; // optional and default
    #----- 2 -----
    #AUTHENTICATION_TYPE = DATABASE;
    # [ DATABASE ]
    # DATABASE = "some_data_base";
    #----- 3 -----
    #AUTHENTICATION_TYPE = BYPASS_NQS;
    # Server Section
    [ SERVER ]
    SERVER_NAME = Oracle_BI_Server ;
    READ_ONLY_MODE = NO;     // default is "NO". That is, repositories can be edited online.
    MAX_SESSION_LIMIT = 2000 ;
    MAX_REQUEST_PER_SESSION_LIMIT = 500 ;
    SERVER_THREAD_RANGE = 40-100;
    SERVER_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
    DB_GATEWAY_THREAD_RANGE = 40-200;
    DB_GATEWAY_THREAD_STACK_SIZE = 0; // default is 256 KB, 0 for default
    MAX_EXPANDED_SUBQUERY_PREDICATES = 8192; // default is 8192
    MAX_QUERY_PLAN_CACHE_ENTRIES = 1024; // default is 1024
    MAX_DRILLDOWN_INFO_CACHE_ENTRIES = 1024; // default is 1024
    MAX_DRILLDOWN_QUERY_CACHE_ENTRIES = 1024; // default is 1024
    INIT_BLOCK_CACHE_ENTRIES = 20; // default is 20
    CLIENT_MGMT_THREADS_MAX = 5; // default is 5
    # The port number specified with RPC_SERVICE_OR_PORT will NOT be considered if
    # a port number is specified in SERVER_HOSTNAME_OR_IP_ADDRESSES.
    RPC_SERVICE_OR_PORT = 9703; // default is 9703
    # If port is not specified with a host name or IP in the following option, the port
    # number specified at RPC_SERVICE_OR_PORT will be considered.
    # When port number is specified, it will override the one specified with
    # RPC_SERVICE_OR_PORT.
    SERVER_HOSTNAME_OR_IP_ADDRESSES = "ALLNICS"; # Example: "hostname" or "hostname":port
    # or "IP1","IP2":port or
    # "hostname":port,"IP":port2.
    # Note: When this option is active,
    # CLUSTER_PARTICIPANT should be set to NO.
    ENABLE_DB_HINTS = YES; // default is yes
    PREVENT_DIVIDE_BY_ZERO = YES;
    CLUSTER_PARTICIPANT = NO; # If this is set to "YES", comment out
    # SERVER_HOSTNAME_OR_IP_ADDRESSES. No specific NIC support
    # for the cluster participant yet.
    // Following required if CLUSTER_PARTICIPANT = YES
    #REPOSITORY_PUBLISHING_DIRECTORY = "<dirname>";
    #REQUIRE_PUBLISHING_DIRECTORY = YES; // Don't join cluster if directory not accessible
    DISCONNECTED = NO;
    AUTOMATIC_RESTART = YES;
    # Dynamic Library Section
    # The dynamic libraries specified in this section
    # are categorized by the CLI they support.
    [ DB_DYNAMIC_LIBRARY ]
    ODBC200 = nqsdbgatewayodbc;
    ODBC350 = nqsdbgatewayodbc35;
    OCI7 = nqsdbgatewayoci7;
    OCI8 = nqsdbgatewayoci8;
    OCI8i = nqsdbgatewayoci8i;
    OCI10g = nqsdbgatewayoci10g;
    DB2CLI = nqsdbgatewaydb2cli;
    DB2CLI35 = nqsdbgatewaydb2cli35;
    NQSXML = nqsdbgatewayxml;
    XMLA = nqsdbgatewayxmla;
    ESSBASE = nqsdbgatewayessbasecapi;
    # User Log Section
    # The user log NQQuery.log is kept in the server\log directory. It logs
    # activity about queries when enabled for a user. Entries can be
    # viewed using a text editor or the nQLogViewer executable.
    [ USER_LOG ]
    USER_LOG_FILE_SIZE = 10 MB; // default size
    CODE_PAGE = "UTF8"; // ANSI, UTF8, 1252, etc.
    # Usage Tracking Section
    # Collect usage statistics on each logical query submitted to the
    # server.
    [ USAGE_TRACKING ]
    ENABLE = NO;
    //==============================================================================
    // Parameters used for writing data to a flat file (i.e. DIRECT_INSERT = NO).
    STORAGE_DIRECTORY = "<full directory path>";
    CHECKPOINT_INTERVAL_MINUTES = 5;
    FILE_ROLLOVER_INTERVAL_MINUTES = 30;
    CODE_PAGE = "ANSI"; // ANSI, UTF8, 1252, etc.
    //==============================================================================
    DIRECT_INSERT = YES;
    //==============================================================================
    // Parameters used for inserting data into a table (i.e. DIRECT_INSERT = YES).
    PHYSICAL_TABLE_NAME = "<Database>"."<Catalog>"."<Schema>"."<Table>" ; // Or "<Database>"."<Schema>"."<Table>" ;
    CONNECTION_POOL = "<Database>"."<Connection Pool>" ;
    BUFFER_SIZE = 10 MB ;
    BUFFER_TIME_LIMIT_SECONDS = 5 ;
    NUM_INSERT_THREADS = 5 ;
    MAX_INSERTS_PER_TRANSACTION = 1 ;
    //==============================================================================
    # Query Optimization Flags
    [ OPTIMIZATION_FLAGS ]
    STRONG_DATETIME_TYPE_CHECKING = ON ;
    # CubeViews Section
    [ CUBE_VIEWS ]
    DISTINCT_COUNT_SUPPORTED = NO ;
    STATISTICAL_FUNCTIONS_SUPPORTED = NO ;
    USE_SCHEMA_NAME = YES ;
    USE_SCHEMA_NAME_FROM_RPD = YES ;
    DEFAULT_SCHEMA_NAME = "ORACLE";
    CUBE_VIEWS_SCHEMA_NAME = "ORACLE";
    LOG_FAILURES = YES ;
    LOG_SUCCESS = NO ;
    LOG_FILE_NAME     =     "C:\OracleBI\server\Log\CubeViews.Log";
    # MDX Member Name Cache Section
    # Cache subsystem for mapping between unique name and caption of
    # members for all SAP/BW cubes in the repository.
    [ MDX_MEMBER_CACHE ]
    // The entry to indicate if the feature is enabled or not, by default it is NO since this only applies to SAP/BW cubes
    ENABLE = NO ;
    // The path to the location where cache will be persisted, only applied to a single location,
    // the number at the end indicates the capacity of the storage. When the feature is enabled,
    // administrator needs to replace the "<full directory path>" with a valid path,
    // e.g. DATA_STORAGE_PATH = "C:\OracleBI\server\Data\Temp\Cache" 500 MB ;
    DATA_STORAGE_PATH     =     "C:\OracleBIData\cache" 500 MB;
    // Maximum disk space allowed for each user;
    MAX_SIZE_PER_USER = 100 MB ;
    // Maximum number of members in a level will be able to be persisted to disk
    MAX_MEMBER_PER_LEVEL = 1000 ;
    // Maximum size for each individual cache entry size
    MAX_CACHE_SIZE = 100 MB ;
    # Oracle Dimension Export Section
    [ ORA_DIM_EXPORT ]
    USE_SCHEMA_NAME_FROM_RPD = YES ; # NO
    DEFAULT_SCHEMA_NAME = "ORACLE";
    ORA_DIM_SCHEMA_NAME = "ORACLE";
    LOGGING = ON ; # OFF, DEBUG
    LOG_FILE_NAME     =     "C:\OracleBI\server\Log\OraDimExp.Log";

  • ASO Cube, increase aggregation or improve retrieval performance

    We've been using an Essbase cube to create reports in OBIEE.
    When we use a level-0 member filter, it takes quite a long time to get the results.
    Any ideas to improve the performance?
    Is there any way I can increase the number of aggregations that occur at a time? Thank you.

    What doesn't make sense to me is that you don't need aggregations on level-zero members, as that is where the data is stored. I'm guessing you mean level-zero members of one dimension and higher-level members of other dimensions. Are those other dimensions dynamic or stored? Do you have a lot of calculations going on that are being retrieved? Have you materialized aggregations on the cube?

  • Cube and aggregation

    Hi All,
    I have created a cube using the OWB wizard. The aggregation tab applies to all measures of the cube, whereas I want to apply aggregation to an individual measure. Any ideas?
    Jak.
    Message was edited by:
    jakdwh

    Hi,
    I have tried to assign different aggregation rules to different measures within the OWB cube editor, without success.
    I created a simple dimension and a cube with two measures, one requiring a sum aggregation and one requiring an average aggregation. I can set them OK and validate OK, but when I close the editor and re-open, they have both been set to the overall cube aggregation policy. Even if this is set to NOAGG, they all become NOAGG. It seems there must be a flaw in the implementation of aggregation within OWB.
    Has anyone found a resolution or workaround? I came across this when trying to utilise degenerate dimension keys in the cube definition...
    Robbie

  • MDX query with DMV to get all cubes and aggregation row count on SSAS engine

    Hi All,
    How can I get all cube names on an SSAS server, and a count of the aggregation rows in each cube?
    I found a DMV that shows all catalog names and descriptions, but where can I find the aggregation row count for each cube?
    Please let me know, thanks in advance.
    Maruthi...

    Hi Maruthi,
    Please check the link below; I hope this will help you.
    SSAS 2008 CTP6 introduced the new DMV $SYSTEM.DISCOVER_OBJECT_ACTIVITY. It lists the memory usage and CPU time for each object (cube, dimension, cache, measure, partition, etc.). It also shows which aggregations were hit or missed, how many times these objects were read, and how many rows were returned by them:
    Discover_object_memory_usage and discover_object_activity:
    select * from $system.discover_object_memory_usage
    select * from $system.discover_object_activity
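    For the two halves of the question specifically, a hedged sketch (MDSCHEMA_CUBES is a standard schema rowset; the DISCOVER_OBJECT_ACTIVITY column names follow the documentation quoted above, so verify them on your build):

        -- List all cube names on the server
        SELECT CUBE_NAME FROM $SYSTEM.MDSCHEMA_CUBES;

        -- Rows returned per object, highest first
        -- (DMV queries support WHERE and ORDER BY, but not JOIN or GROUP BY)
        SELECT OBJECT_PARENT_PATH, OBJECT_ID, OBJECT_ROWS_RETURNED
        FROM $SYSTEM.DISCOVER_OBJECT_ACTIVITY
        ORDER BY OBJECT_ROWS_RETURNED DESC;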
    Thanks
    Suhas

  • Show child levels when the top level is forced to be null - Avoid aggregations on the top level

    Hi everybody,
    It was difficult to select a title for my question.
    Let's say I have a geographical hierarchy with Region --> Country --> District --> Store levels.
    I want to avoid the aggregations in local currency at the Region level, because they make no sense.
    I scoped the Net Sales measure like this:
    SCOPE ([Measures].[Net Sales], [Fx Rate].[Fx Rate].[Local Currency], [Stores].[Store].Members);
    this = SUM([Fx Rate].[Fx Rate].&[1], [Measures].[Net Sales LC]); 
    END SCOPE;
    SCOPE ([Measures].[Net Sales], [Fx Rate].[Fx Rate].[Local Currency], [Stores].[Region].Members);
    this = null;
    END SCOPE;
    The scopes are working, but I have a visualization problem. When I drag and drop the geo hierarchy into the pivot table, nothing is shown, because the upper level (Region) has only empty cells (this = null;). If I change the Fx Rate type to a reference currency, the regions are shown and I can expand the lower levels, then change the filter back to local currency and the values are back, but this is not the best approach.
    Any ideas about how to tackle that? Any comment would be appreciated.
    Kind Regards

    I blogged about this scenario and a solution here:
    http://www.artisconsulting.com/blogs/greggalloway/Lists/Posts/Post.aspx?ID=24
    http://artisconsulting.com/Blogs/GregGalloway

  • Transformations and DTPs are deactivating after adding some NAVs to a cube

    Hello Experts
    Transformations and DTPs are deactivating after adding some NAVs (navigational attributes) to a cube and then activating it.
    I know that this is normal, but activating all the DTPs and transformations is painful work. I would just like to know whether there is any program to activate the dependent objects.

    Yes, that's right. A tedious job.
    In my last project I created 3 MultiCubes, 14 InfoCubes and 23 DSOs.
    -> 1 change in the data model and you must reactivate 20 objects.
    I wrote an ABAP program to activate:
    - DataSources
    - transformations
    - DTPs
    - update rules and
    - MultiCubes.
    It's very nice. So simple now to make a change.
    I select:
    - an object, or
    - an InfoArea,
    and the program searches all objects in the dataflow.
    I have two modes:
    - to display all objects
    - and to activate the inactive objects.
    There are many tasks where I think a tool could help me to increase my development performance. But I don't have enough time.
    Sven

  • Adding new characteristics to cube with data

    Hi Gurus,
    I need to add some characteristics to a cube that is already in production; this cube is a customized version of the material stocks/movements cube (0IC_C03).
    I am looking for a way to add the new characteristics without having to do a re-initialisation (opening stock, ...). I am thinking about a loopback process, but how do I manage to get the new characteristics populated for the historical data?
    Thank you.

    Hi,
    If you want to load historical data, you must take ECC downtime, and re-initialization is required, because you are adding a new object to the cube. For that you need to change the update rules and then load the historical data, so it is not possible without downtime and re-initialization.
    Check the approach below.
    You have data in the PSA, so try deleting the data from the cube and then loading from the PSA. Since I think you may have written code in the update rules only, it may work in that case.
    Thanks
    Reddy

  • SSaudit for ASO cubes in 11.1.1.3?

    Hello Gurus -
    I was wondering if anyone has been able to get the SSAudit feature working on an ASO cube in version 11.1.1.3, to get the .atx and .alg files generated?
    We tried, but it doesn't seem to work for us. Also, the "send" activity itself doesn't seem to get captured in either the application log or the Essbase logs.
    I was wondering if there is a workaround or a special setting we need for ASO.
    Please throw some light on this if anyone knows anything about it.
    Regards

    SSAudit is not supported for ASO cubes, and unfortunately neither is transaction logging.
    If you need to do something like this, I suggest using Dodeca, which can save sends to a relational table.

  • Adding fields to 0IC_C03 cube

    Hi friends,
    I have to add some fields to the 0IC_C03 cube. The fields are reason for movement, special stock indicator, sales order, etc. They are coming in the DataSource 2LIS_03_BF. I did add them and loaded the data, but the data was not coming through properly. Is there any other method by which I can make use of these fields without adding them to the cube, and then build a MultiProvider on top? I also need the customer field. Will this data come through if I build a MultiProvider on top of this? Can I create a generic DataSource from the table that provides these fields, build a DSO with those fields, and create a MultiProvider on it?
    Will the key date concept work with this?
    I would appreciate ur help.
    Thanks,
    Kapil

    Dear Kapil,
    Please go through the link provided below; I hope this will be helpful.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    Cheers,
    VEERU.

  • ASO cube

    Hi
    We are trying to build a cube. It is an ASO cube, and we want it to have Planning functionality. But we saw that Planning creates a BSO cube and has a function for allocations. I want to know how to do this in an ASO cube.
    Jim

    Hi Jim,
    1. Firstly, I appreciate your work of incorporating the whole functionality of Planning and Budgeting in an ASO cube.
    2. I would like to know your version. In version 11.1.2 we do have allocations even for ASO cubes. So, if it's the new version, you have the predefined function available ("execute allocation").
    Do update the forum on your progress.
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/

  • ASO cube structure

    Hi everyone,
    I'm aware of the main differences between aggregate storage and block storage in terms of performance and features (http://www.datawarehousingsupport.com/2010/03/differences-between-aggregate-and-block.html). However, I don't know anything about how ASO cubes are structured at the bit level, or about the algorithms (e.g. hashing, tries, ...) used to perform fast searches on them.
    Someone suggested that I read Dan Pressman's presentation on www.odtug.com. It's a good start, but does anyone have further documentation?
    Thanks,

    Probably you are running into this one:
    Bug 14469960 - DATA REPLICATION FROM BSO TO ASO CRASHES TARGET DATABASE
    Supposedly they fixed this issue in 11.1.2.2 but we are getting the problem on a 11.1.2.2 system.

  • MDX query performance on ASO cube with dynamic members

    We have an ASO cube, and we are using MDX queries to extract data from it. We are doing some performance testing on the MDX data extract.
    Recently we made around 15-20 Account dimension members dynamic in the ASO cube, and now the query takes around an hour and a half to run on an empty cube. Earlier the query ran in 1 minute on the empty cube, when there were no dynamic members.
    I am not clear why it takes so much time to extract data via MDX from an empty cube when there is nothing to extract. Performance has also degraded when extracting from the cube with data in it.
    Do dynamic members in the outline affect MDX performance? Is there a way to exclude dynamic members from the MDX extract?
    I appreciate any insights on this issue.

    I guess it depends on what the formulas of those members in the dynamic hierarchy are doing.
    As an extreme example, I could write a member formula that counts every unique member combination in the cube and assign it to multiple members; regardless of whether I have any data in the database or not, that formula is going to resolve itself when you query it, and it is going to take a lot of time. You are probably somewhere in between that and a simple formula that doesn't require any overhead. So without seeing the MDX it is hard to say what might be causing the issue.
    As far as excluding members, there are various functions in MDX to narrow down the set you are querying:
    Filter(), Contains(), Except(), Is(), Subset(), UDA(), etc.
    Keep in mind you did not make members dynamic, you made a hierarchy dynamic; that is not the same thing, and it does impact the way Essbase internally optimizes the database based on stored vs. dynamic hierarchies. So that alone can have an impact as well.

  • ASO Cube Does not overwrite the data

    Hi,
    I have an ASO cube and I am loading data using the following text file. While loading the text file I am using a rule file where I said to overwrite the existing values. But it is not overwriting the values. I am using 7.1.5.
    I will appreciate your suggestions.
    Thanks
    Anky
    Text file:
    Jan   test   Pr1   1000
    Jan   test   Pr1   2000
    Jan   test   Pr1   100000

    Hi Ankyjay,
    I found this in the DBAG, page 1087; it might help:
    "When you take advantage of the aggregate storage data load buffer, Analytic Services sorts and works with the values after all data sources have been read. If multiple records are encountered for any specific data cell, the values are accumulated. Analytic Services then stores the accumulated values."
    In other words, the three records for Jan / test / Pr1 in your file are accumulated to 103000, rather than the last value overwriting the earlier ones.
