Disco 4 OLAP dimension limit question

Hello!
Small question about Disco. Is it possible to do a limit on a dimension similar to this:
limit dim1 to HIERARCHY DEPTH 4 SKIP 3 'ONE_TOP_MEMBER'
limit dim1 keep top 500 BASEDON SALES
The problem is that in Disco we cannot limit the current selection if we are doing a top/bottom query. It allows us to select only a full level of the hierarchy, but we need only several members of that level. Is there any other approach?
Thank you in advance!
Regards,
Kirill Boyko

You're right... it was interesting :-)
I couldn't get it to work using just OLAP DML commands...
++++++++
"SUBCAT level
limit articles to articles_levelrel eq 'SUBCAT'
"NOTE: DOES NOT WORK for EACH subcat. Adds 20% children of first sub-category in status
limit articles add limit(limit(articles to children using articles_parentrel articles(articles articles)) keep top npcent percentof total(sales, articles))
sort articles hierarchy articles_parentrel
rpr down articles articles_parentrel heading 'SALES' sales heading 'Pct_Parent' sales(articles articles)/sales(articles articles_parentrel(articles articles)) heading 'TopNpcent_Children' limit(limit(articles to children using articles_parentrel articles(articles articles)) keep TOP 20 PERCENTOF total(sales, articles))
++++++++
You can create a DML program that uses the LIMIT function with TOP N PERCENTOF based on an expression to do this.
****** OLAP DML program temp1 ******
arg _npcent integer
vrb npcent integer
vrb vset1 valueset articles
" Default to the top 20 percent when no argument is passed
if _npcent eq na
  then npcent = 20
  else npcent = _npcent
" Start with an empty valueset and all members of the SUBCAT level
limit vset1 to na
limit articles to articles_levelrel eq 'SUBCAT'
" TEMPSTAT restores the status of ARTICLES after the next DO ... DOEND group
tempstat articles
DO
  " For each sub-category in status, collect its top NPCENT percent of
  " children (ranked by SALES) into the valueset
  FOR articles
    DO
      limit vset1 add limit(limit(articles to children using articles_parentrel -
        articles(articles articles)) keep top npcent percentof total(sales, articles))
    DOEND
DOEND
" Put the collected items into status alongside their sub-categories
limit articles add vset1
sort articles hierarchy articles_parentrel
" Example call: select the top 20% of items in each sub-category
temp1 20
rpr down articles articles_parentrel -
  heading 'SALES' sales -
  heading 'Pct_Parent' sales(articles articles)/sales(articles articles_parentrel(articles articles)) -
  heading 'TopNpcent_Children' limit(limit(articles to children using articles_parentrel -
    articles(articles articles)) keep TOP 20 PERCENTOF total(sales, articles))
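For what it's worth, the second command in the original question is already close to valid OLAP DML: the DML LIMIT command supports TOP/BOTTOM n BASEDON an expression directly, so the restriction described above seems to be in the Discoverer user interface only, not in the DML itself. A minimal, untested sketch, assuming a dimension DIM1 and a measure SALES as in the question:
" Keep only the 500 members of dim1 with the highest sales
limit dim1 keep top 500 basedon sales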

Similar Messages

  • OLAP Dimension Attributes in a BI Beans Crosstab

    Hello,
    How can I show my custom OLAP dimension attributes in the BI Beans Crosstab?
Can I compare these attributes with the measures?
    I have a cube with three dimensions and I have defined some limit amounts which are valid only for one of the dimensions. Additionally they are defined only for aggregated levels in this dimension.
    I was wondering whether I could define these limit amounts as dimension attributes and use them to compare with the measures.
    Is there any other possibility?
    Maybe I can create my custom column in the Crosstab?
    Thanks in advance,
    Michal

I would recommend using the built-in security feature of the analytic workspace. There is a command called PERMIT that can be applied to both dimensions and cubes and that controls user access to dimension members.
There is a document on the OLAP home page that explains how to secure an AW using the PERMIT command:
    http://www.oracle.com/technology/products/bi/olap/olap10g_applying_aw_security.doc
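As a minimal, untested sketch of the PERMIT approach (the dimension name GEOG and the member 'WEST' are made up for illustration; run this in the attached AW, then UPDATE and COMMIT to save it):
" Hide one member of the GEOG dimension from the current user
consider geog
permit read when geog ne 'WEST'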
    Hope this helps
    Keith Laker
    Oracle EMEA Consulting
    BI Blog: http://oraclebi.blogspot.com/
    DM Blog: http://oracledmt.blogspot.com/
    BI on Oracle: http://www.oracle.com/bi/
    BI on OTN: http://www.oracle.com/technology/products/bi/
    BI Samples: http://www.oracle.com/technology/products/bi/samples/

  • Disco Plus OLAP - Client Hardware Requirements

    Hi,
We've just started experimenting with Disco Plus for OLAP and have built a MOLAP cube. It contains 51,000 customers, 12 order types, 80,000 activities and about 1,000 dates, with one measure - Revenue. If I have my calculations correct, we're knocking on 81.6 billion records just for that alone.
    With this sheer weight of data involved, are there any hardware recommendations any forum member might like to suggest for a desktop client that will access Disco Plus OLAP to create and manage reports?
    At the moment I'm running on a Core Duo laptop (2Ghz) with 1.25GB of memory - is this sufficient, or should I consider moving to an actual desktop machine rather than a laptop? What about memory? Should I basically seek the best available machine?
    Thanks,
    Andy

With that amount of detail, I'd suspect network speed and server memory are going to be more of an issue than client speed. If you are planning on running million-row result sets, in Relational or OLAP, you might be waiting a while.

  • Use short description on OLAP dimension member in disco for OLAP

    Hi All,
I've noticed that in the spreadsheet add-in you can decide under "Query Options" whether to use the long or short description when displaying a dimension member. How do you do the same in Discoverer Plus for OLAP? I can't find it anywhere.
    Cheers,
    Brandon

    This feature is not available in Disco :-( Logged as an enhancement request.
    Keith

  • LIMIT question

    Hello!
I've got one interesting question on OLAP DML. Assume you have a dimension ARTICLES with one hierarchy with levels CATEGORY, SUB-CATEGORY and ITEM, and one cube with a variable SALES dimensioned by ARTICLES.
Question: how do you limit the ARTICLES dimension to the top 20% of items in EACH sub-category based on sales?
I have tried to prepare a DML program with a loop over the ARTICLES dimension using TEMPSTAT, PUSHLEVEL and POPLEVEL, but it seems that TEMPSTAT kills all PUSHes in such a loop.
Does anybody know how to do it?
Thank you in advance.


  • OLAP kinda dumb question....

Ok, I know it's no longer required in OLAP 10g to create a CWM cube first, but if I have a relational cube with associated CWM entries, is it even possible to create an AW based on it any more? Or do I have to recreate all the dimensions, cubes, measures, etc.?
    p.s. there used to be a wizard to do this, but I don't see it any more.
    Thx,
    Scott

    Hi,
I did this recently in a project. I'm not sure if it's 100% the right way to do it, but it worked for me. I did it only for the dimensions, however, as I wanted to be 100% sure that the cube was created how I wanted.
    First I created an AW using AWM where I wanted all the dimensions. Next I populated and ran this script:
set serveroutput on
execute cwm2_olap_manager.set_echo_on;
DECLARE
  v_dimension_owner     VARCHAR2(20) := 'DW_DM_TEST';
  v_dimension_name      VARCHAR2(20) := 'DIM_TEST';
  v_aw_owner            VARCHAR2(20) := 'DW_OLAP';
  v_aw_name             VARCHAR2(20) := 'AW_ORDER';
  v_aw_dimension_name   VARCHAR2(30) := 'AW_DIM_TEST';
  v_AWDimload_spec_name VARCHAR2(20) := 'order_test_load';
BEGIN
  -- Attach the AW in read/write mode
  dbms_aw.aw_attach(v_aw_name, true);
  -- Create the AW dimension from the relational (CWM) dimension
  dbms_awm.create_awdimension(
    p_Source_Dimension_Owner => v_dimension_owner,
    p_Source_Dimension_Name  => v_dimension_name,
    p_AW_Owner               => v_aw_owner,
    p_AW_Name                => v_aw_name,
    p_Target_Dimension_Name  => v_aw_dimension_name);
  -- Define a full load and run it to populate the AW dimension
  dbms_awm.create_awdimload_spec(
    p_AWDimLoad_Spec_Name => v_AWDimload_spec_name,
    p_Dimension_Owner     => v_dimension_owner,
    p_Dimension_Name      => v_dimension_name,
    p_LoadType_Name       => 'FULL_LOAD');
  dbms_awm.refresh_awdimension(v_aw_owner, v_aw_name,
                               v_aw_dimension_name, v_AWDimload_spec_name);
  dbms_aw.aw_detach(v_aw_name);
  COMMIT;
END;
/
This also populates the AW dimension. I noticed that if I didn't populate the dimension, it had to be mapped in AWM. So for me it was easier to run this script and then maintain the dimension in AWM again, selecting the delete options for the dimension members.
I think there's a similar script for cubes also, but I'm not sure if it allows you to set partitioning.
Hope this answers your question.
Regards, Ragnar

  • How to Open the FailedFilesLog.txt File (statement), and How to Increase the 100 File Limit (question)

    It took us a while to figure this out, so I'm posting this in case it's helpful for someone out there. Plus, I have a question...
    DPM gave the following error for one of our file servers:
Description: The replica of Volume D:\ on <servername> is inconsistent with the protected data source. Number of files skipped for synchronization due to errors has exceeded the maximum allowed limit of 100 files on this data source (ID 32538 Details: Internal error code: 0x809909FE)
Recommended action: Review the failure errors for individual files from the log file \\?\Volume{8492c150-f195-11de-a186-001cc4ef89a0}\B1E9D373-2C03-464E-A472-99BC93DB1E2A\FailedFilesLog.txt and take appropriate action. If some files fail consistently, you can exclude the folders containing these files by modifying the protection group or moving the files to another location.
So, how do you actually open the FailedFilesLog.txt file shown in this DPM alert? What is this path referring to? Well, this is the mount point for the protected server's replica volume on the DPM server, which is mounted under \Program Files\Microsoft DPM\DPM\Volumes\Replica\servername\File System. Here you'll see the mount points for all of the server's protected volumes. However, if you try to open one of these mounted volumes in Windows Explorer, you'll get Access Denied, even if you have administrator rights. (If someone knows of a way around this, please let me know.) As a workaround, you can access this mounted volume in an elevated command prompt. Steps:
1. Open an Administrator Command Prompt.
2. Type mountvol <AnyAvailableDriveLetter>: \\?\Volume{VolumeGUID}
   Example: mountvol m: \\?\Volume{8492c150-f195-11de-a186-001cc4ef89a0}
   Note that we're only using the first part of the path to the FailedFilesLog.txt file given in the DPM alert, starting from \\? and ending after the } character.
3. Type m: to change to the newly mounted m: drive.
4. Type cd B1E9D373-2C03-464E-A472-99BC93DB1E2A (this is actually a folder name, so we're just going into this folder).
5. Type dir and you should see the FailedFilesLog.txt file. This file can be copied to another location where it's easier to use (i.e. in Windows Explorer).
Be sure to unmount this volume when you're done by typing mountvol m: /d in the command prompt. (Mountvol reference: http://technet.microsoft.com/en-us/library/cc772586(WS.10).aspx.)
What a pain, eh? But at least by reviewing the FailedFilesLog.txt file you can determine which files or folders caused the sync to fail and take action accordingly.
Now, here's my question: where is the registry key that lets me adjust the limit of 100 files that DPM allows to be skipped before it fails the replica? Hopefully someone out there will tell me. I know this can be done because Kapil Malhotra said so in this post:
http://groups.google.com/group/microsoft.public.dataprotectionmanager/browse_thread/thread/a179fa30fb50c9b0/e9a348f2a9386063?lnk=raot.
Also, does anyone know what the internal error code 0x809909FE means in this alert? Knowing this may help us determine what caused these files to fail. Interestingly, the FailedFilesLog.txt file gave a different error code next to each failed file: 0x80070002.
-Taylorbox

Thanks for responding, Fahd. So, just to be sure...
Do I add this registry key to the DPM server or to the protected servers (or both)?
In either case, the ContinueOnFailure key does not currently exist, so I must create this key and the MaxFailedFiles DWORD value manually, right?
Does the server on which I create this regkey have to be restarted for it to take effect?
Can the DPM alert for the 0x809909FE error event (for exceeding the limit of 100 failures) please be adjusted to provide a path to the FailedFilesLog.txt file that actually works if you click on it?
Any ideas on why the 0x80070002 "File not found" error happened? The files on the server were simply created and then deleted. Why would such file activity lead to this error?
Thanks,
-Taylorbox

  • MaxL / INCBUILDDIM - two dimension limit

I'm trying to build multiple dimensions in a single MaxL 'import database dimensions' command by including multiple 'from xx using rules_file xx' blocks separated by commas. This syntax is as per the documentation. The reason for building multiple dimensions in a single 'import database dimensions' command, rather than using multiple commands, is that this achieves the same effect as the ESSCMD 'INCBUILDDIM', where the database is only restructured after ALL the dimensions have been built. MaxL doesn't error, but it only builds the first two dimensions specified and simply ignores any others! Has anyone seen this issue - I'm wondering if it's a MaxL bug (v6.5.1). I've tried different dimensions, changing the order etc. Always the same - builds the first two, ignores anything else. Thanks.

Assuming that your AW was defined using AWM or OWB (i.e. it is standard form), then you should find two additional objects related to your dimension -- the INHIER valueset and the HIERLIST dimension. For example, if your dimension is named PRODUCT, then you should find
DEFINE PRODUCT_HIERLIST DIMENSION READONLY LOCKDFN TEXT
DEFINE PRODUCT_INHIER VALUESET READONLY LOCKDFN PRODUCT <PRODUCT_HIERLIST>
(These are 11g definitions -- the 10g versions do not have READONLY LOCKDFN.)
The PRODUCT_HIERLIST dimension contains one member for each hierarchy of the dimension, say 'H1' and 'H2'. To limit the dimension to just the members in H2 you can say
LIMIT PRODUCT TO PRODUCT_INHIER(PRODUCT_HIERLIST 'H2')
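Building on that, here is a small untested sketch that visits each hierarchy in turn (inside a program, using the standard-form names above):
" Report the members of PRODUCT one hierarchy at a time
for product_hierlist
  do
    limit product to product_inhier
    show joinchars('Hierarchy: ' product_hierlist)
    report w 30 product
  doend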

  • Temporary files and buffersize limit question

    Hello,
I have two questions:
1. Buffer limit
Is there a limit on the size of the buffer returned to a client? I have a conversational service routine. If I want to return 50 records (i.e. view structures) my service routine hangs. If I return 10 records it is OK. Is there a limit on the size that can be returned? Is there a parameter in UBBCONFIG to fix this?
2. Temporary files on conversational clients / services
My service routine creates temporary files in the /tmp directory. Unfortunately the mode for /tmp has the 't' bit on, so users cannot delete files belonging to other users. The problem is I have a lot of temporary files in /tmp. Does anyone know how to fix this, or where you can specify not to use temp files, or where you can specify the directory in which to create temp files? The names of the files are /tmp/TUXxxxxxx where xxxxxx is a random series of characters.
Thanks a lot
Johan den Boer
email: [email protected]
[email protected]

Buffers that are larger than 3/4 of MSGMNB are sent through a file. The name is generated with tmpnam(), so you should be able to specify a different directory by setting TMPDIR in the environment.
The preferred solution is to not use file transfer, by setting your IPC parameters high enough to pass all of your application messages.
If your service routine hangs, you should look at a possible application problem. How are you packing the VIEWs into a buffer? Are you using embedded FML with FLD_VIEW32 fields? That would be the best way.
Remember that VIEWs are binary structures that are unique to a particular machine type. If you pack VIEWs together into your own buffer format and try to use them on a different machine type, then they won't work properly.
Scott Orshan

  • Oracle DB 10g and BI Discoverer for OLAP - Dimension Attributes

    Hi,
We are using Oracle Database 10g Release 1 with the partitioning, data mining and OLAP options, and Analytic Workspace Manager 10.2.0.1.0A to create the multidimensional objects. For the user dimensions created using AWM we have custom attributes like HireDate, StartDate, Sales Personnel Role etc. For reporting purposes we are using Discoverer for OLAP. In this Discoverer version, I don't see an explicit provision to drag these attributes onto the worksheet. We are only able to filter based on these attributes and capture the measures...
Can someone throw some light on this? Also, if there is a possibility to drag these attributes onto the worksheet, can that be expounded?
    Thanks in advance!

    Again this depends on what you are trying to achieve. If you define an attribute against a dimension it takes very little space as it is not directly connected to a cube and so no data is stored against that attribute.
If, however, you have a 4D revenue cube (product, geography, channel, time) with product attributes COLOR and PACK SIZE, and you want to view revenue additionally broken out by COLOR and PACK SIZE as well as the other four dimensions, then your schema will require additional storage space. However, 10g compressed cubes and sparsity options do help to manage the explosion of data points as the number of dimensions increases. This should allow you to easily add attributes as dimensions into your cube.
    One thing to remember is that most users start to struggle when confronted with more than 9 dimensions. So although Oracle OLAP can create extremely large dimensional models, users prefer their cubes to have 9 or fewer dimensions.
    Hope this helps,
    Keith
    Oracle Business Intelligence Product Management
    BI on Oracle: http://www.oracle.com/bi/
    BI on OTN: http://www.oracle.com/technology/products/bi/
    BI Beans http://www.oracle.com/technology/products/bib/index.html
    Discoverer: http://www.oracle.com/technology/products/discoverer/
    BI Software: http://www.oracle.com/technology/software/products/ias/devuse.html
    Documentation: http://www.oracle.com/technology/documentation/appserver1012.html
    BI Samples: http://www.oracle.com/technology/products/bi/samples/
    Blog: http://oraclebi.blogspot.com/

  • Oracle OLAP cube build question

    Hello,
I am trying to build a reasonably large cube (around 100 million rows from the underlying relational fact table). I am using Oracle 10g Release 2. The cube has 7 dimensions, the largest of which is TIME (6 years of data with the lowest level day). The cube build never finishes.
Apparently it collapses while doing "Auto Solve". I'm assuming this means calculating the aggregations for upper levels of the hierarchy (although this is not mentioned in any of the documentation I have).
I have two questions related to this:
1. Is there a way to keep these aggregations from being performed at cube build time on dimensions with a value-based hierarchy? I already have the one level-based dimension (TIME) unchecked in the "Summarize To" tab in AW Manager.
2. Are there any other tips that might help me get this cube built?
    Here is the log from the olapsys.xml_load_log table:
    RECORD_ID LOG_DATE AW XML_MESSAGE
    1. 09-MAR-06 SYS.AWXML 08:18:51 Started Build(Refresh) of APSHELL Analytic Workspace.
    2. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Attached AW APSHELL in RW Mode.
    3. 09-MAR-06 SPADMIN.APSHELL 08:18:53 Started Loading Dimensions.
    4. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members.
    5. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    6. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ACCOUNT.DIMENSION. Added: 0. No Longer Present: 0.
    7. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    8. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for CATEGORY.DIMENSION. Added: 0. No Longer Present: 0.
9. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for DATASRC.DIMENSION (3 out of 9 Dimensions).
10. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for DATASRC.DIMENSION. Added: 0. No Longer Present: 0.
    11. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for ENTITY.DIMENSION (4 out of 9 Dimensions).
    12. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for ENTITY.DIMENSION. Added: 0. No Longer Present: 0.
    13. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    14. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INPT_CURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    15. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for INTCO.DIMENSION (6 out of 9 Dimensions).
    16. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for INTCO.DIMENSION. Added: 0. No Longer Present: 0.
    17. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RATE.DIMENSION (7 out of 9 Dimensions).
    18. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RATE.DIMENSION. Added: 0. No Longer Present: 0.
    19. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    20. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Finished Loading Members for RPTCURRENCY.DIMENSION. Added: 0. No Longer Present: 0.
    21. 09-MAR-06 SPADMIN.APSHELL 08:18:54 Started Loading Dimension Members for TIME.DIMENSION (9 out of 9 Dimensions).
    22. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Members for TIME.DIMENSION. Added: 0. No Longer Present: 0.
    23. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Dimension Members.
    24. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies.
    25. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    26. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Finished Loading Hierarchies for ACCOUNT.DIMENSION. 1 hierarchy(s) ACCOUNT_HIERARCHY Processed.
    27. 09-MAR-06 SPADMIN.APSHELL 08:18:55 Started Loading Hierarchies for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    28. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for CATEGORY.DIMENSION. 1 hierarchy(s) CATEGORY_HIERARCHY Processed.
    29. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for DATASRC.DIMENSION (3 out of 9 Dimensions).
    30. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Finished Loading Hierarchies for DATASRC.DIMENSION. 1 hierarchy(s) DATASRC_HIER Processed.
    31. 09-MAR-06 SPADMIN.APSHELL 08:18:56 Started Loading Hierarchies for ENTITY.DIMENSION (4 out of 9 Dimensions).
    32. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for ENTITY.DIMENSION. 2 hierarchy(s) ENTITY_HIERARCHY1, ENTITY_HIERARCHY2 Processed.
    34. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INPT_CURRENCY.DIMENSION. No hierarchy(s) Processed.
    36. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for INTCO.DIMENSION. 1 hierarchy(s) INTCO_HIERARCHY Processed.
    37. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RATE.DIMENSION (7 out of 9 Dimensions).
    38. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RATE.DIMENSION. No hierarchy(s) Processed.
    39. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    40. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for RPTCURRENCY.DIMENSION. No hierarchy(s) Processed.
    41. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Hierarchies for TIME.DIMENSION (9 out of 9 Dimensions).
    42. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies for TIME.DIMENSION. 2 hierarchy(s) CALENDAR, FISCAL_CALENDAR Processed.
    43. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Hierarchies.
    44. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes.
    45. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ACCOUNT.DIMENSION (1 out of 9 Dimensions).
    46. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ACCOUNT.DIMENSION. 6 attribute(s) ACCTYPE, CALC, FORMAT, LONG_DESCRIPTION, RATETYPE, SCALING Processed.
    47. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for CATEGORY.DIMENSION (2 out of 9 Dimensions).
    48. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for CATEGORY.DIMENSION. 2 attribute(s) CALC, LONG_DESCRIPTION Processed.
49. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for DATASRC.DIMENSION (3 out of 9 Dimensions).
50. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for DATASRC.DIMENSION. 3 attribute(s) CURRENCY, INTCO, LONG_DESCRIPTION Processed.
    51. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for ENTITY.DIMENSION (4 out of 9 Dimensions).
    52. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for ENTITY.DIMENSION. 3 attribute(s) CALC, CURRENCY, LONG_DESCRIPTION Processed.
    53. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INPT_CURRENCY.DIMENSION (5 out of 9 Dimensions).
    54. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INPT_CURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    55. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for INTCO.DIMENSION (6 out of 9 Dimensions).
    56. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Finished Loading Attributes for INTCO.DIMENSION. 2 attribute(s) ENTITY, LONG_DESCRIPTION Processed.
    57. 09-MAR-06 SPADMIN.APSHELL 08:18:57 Started Loading Attributes for RATE.DIMENSION (7 out of 9 Dimensions).
    58. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RATE.DIMENSION. 1 attribute(s) LONG_DESCRIPTION Processed.
    59. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for RPTCURRENCY.DIMENSION (8 out of 9 Dimensions).
    60. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Finished Loading Attributes for RPTCURRENCY.DIMENSION. 2 attribute(s) LONG_DESCRIPTION, REPORTING Processed.
    61. 09-MAR-06 SPADMIN.APSHELL 08:18:58 Started Loading Attributes for TIME.DIMENSION (9 out of 9 Dimensions).
    62. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes for TIME.DIMENSION. 3 attribute(s) END_DATE, LONG_DESCRIPTION, TIME_SPAN Processed.
    63. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Attributes.
    64. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Loading Dimensions.
    65. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Started Updating Partitions.
    66. 09-MAR-06 SPADMIN.APSHELL 08:20:26 Finished Updating Partitions.
    67. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Loading Measures.
    68. 09-MAR-06 SPADMIN.APSHELL 08:20:40 Started Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE.
    69. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Finished Load of Measures: SIGNEDDATA from Cube FINANCE.CUBE. Processed 100000001 Records. Rejected 0 Records.
    70. 09-MAR-06 SPADMIN.APSHELL 10:54:06 Started Auto Solve for Measures: SIGNEDDATA from Cube FINANCE.CUBE.

Hi, I've taken a few minutes to do a quick analysis. I just saw in your post that this isn't "real data", but some type of sample. Here is what I'm seeing. First off, this is the strangest dataset I've ever seen. With the exception of TIME, DATASOURCE, and RPTCURRENCY, every single other dimension is nearly 100% dense. Quite truthfully, in a cube with this many dimensions, I have never seen data be 100% dense like this (usually with this many dimensions it's more around 0.01% dense at most, usually even lower than that). Is it possible that the way you generated the test data caused this to happen?
    If so, I would strongly encourage you to go back to your "real" data and run the same queries and post results. I think that "real" data will produce a much different profile than what we're seeing here.
    If you really do want to try and aggregate this dataset, I'd do the following:
    1. Drop any dimension that doesn't add analytic value
    Report currency is an obvious choice for this - if every record has exactly the same value, then it adds no additional information (but increases the size of the data)
    Also, data source falls into the same category. However, I'd add one more question / comment with data source - even if all 3 values DID show up in the data, does knowing the data source provide any analytical capabilities? I.e. would a business person make a different decision based on whether the data is coming from system A vs. system B vs. system C?
2. Make sure all remaining dimensions except TIME are DENSE, not sparse (see the sketch after this list). I'd probably define the cube with this order:
    Account...........dense
    Entity..............dense
    IntCo...............dense
    Category.........dense
    Time...............sparse
    3. Since time is level based (and sparse), I'd set it to only aggregate at the day and month levels (i.e. let quarter and year be calculated on the fly)
    4. Are there really no "levels" in the dimensions like Entity? Usually companies define those with very rigid hierarchies (assuming this means legal entity)
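In OLAP DML terms, the layout suggested in point 2 corresponds to a variable whose only sparse (composite) dimension is TIME. A rough, untested sketch, with dimension and variable names taken from the log above (in practice AWM sets this through the cube's sparsity options rather than hand-written DEFINE statements, and the composite name here is made up):
" Only TIME goes into the composite, i.e. is treated as sparse
define finance_comp composite <time>
" The remaining dimensions stay dense in the variable definition
define signeddata variable decimal <account entity intco category finance_comp>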
Good luck with loading this cube. Please let us know how "real" this data is. I suspect with that many dimensions that the "real" data will be VERY sparse, not dense like this sample is, in which case some of the sparsity handling functionality would be of huge benefit to you. As it is, with the data being nearly 100% dense, turning on sparsity for any dimension other than TIME probably kills your performance.
    Let us know what you think!
    Thanks,
    Scott

  • Any setup reqd for Disco for OLAP to attach multiple AWs per schema...

I have a schema with two AWs - A and B (I created the new AW, B, recently).
When I log on to Discoverer Plus for OLAP via an App Server (giving the schema user/pass), I do not get to pick AW B explicitly.
All measure folders/cubes from both AWs A and B are displayed, and I can choose the cube from AW B. However, after all the worksheet creation steps are done, the worksheet query fails with the error:
========================================
oracle.dss.dataSource.common.OLAPException: null
java.lang.NullPointerException
java.lang.NullPointerException
oracle.dss.dataSource.common.QueryRuntimeException: BIB-9009 Oracle OLAP could not create cursor.
========================================
The D4O diagnostic test gives the following:
oracle.dss.d4o.common.D4OException: BIB-9009 Oracle OLAP could not create cursor.
oracle.express.ExpressServerExceptionError class: OLAPI
Server error descriptions:
DPR: Unable to execute the query, Generic at TxsOqCursorManager::fetchInitialBlocks
OES: ORA-34224: Analytic workspace <<schema>>.<<AW_B>> is not attached.
, Generic at TxsRdbResultSet:absolute()
========================================
The dimension selections etc. (intermediate steps of the wizard) are working on the right data/hierarchies, so the second AW did get attached.
If I create a worksheet based on cubes present in AW A, then it works fine.
Are there any special settings/programs (ONATTACH, AUTOGO and the like) required when using multiple AWs within a schema?
The schema has the olap_user and olap_dba roles as well as access to the Discoverer Catalog via iAS Discoverer Admin.
Thanks in advance for any help.
Shankar

Discoverer for OLAP should see all measures and dimensions that you have access to. You never need to explicitly attach an AW.
You are correct in thinking that the AW did get attached if you get to a point in the Workbook Wizard where data is displayed. The error messages are not meaningful to me. I would guess that it is the result of some sort of metadata error. You might see if AW 'A' is causing any sort of conflict by revoking SELECT on AW$A (you'll need to do this as a different user than the owner of the AW) and then attempting to view AW 'B'.

  • Dimension design question

    Hello,
I am new to dimensional modeling. I have a requirement where I need to create a TIME dimension with the following attributes:
time_id, system_date, Hours, week_end_indicator, hliday_ind and some more. This dimension was created because I receive a count file of roadway traffic which holds a count for every hour. There is a new requirement that some files will have a count for every 5 minutes. My question is how I am going to accommodate this new functionality with the current TIME dimension, because there is no column with the time of day. Should I create a new column in the TIME dimension or create another dimension?
I will appreciate your input.
    Thanks
    Suhail

    Hi
You should first declare the grain of the fact table ("http://www.rkimball.com/html/designtips/2001/designtip21.html"). You do this by understanding the queries that your users will run against the fact table. Although this sounds obvious, it deserves considerable thought, and you should determine the technical implications of these queries exactly. Once you understand the demands that the queries will place on the data, you should find a way to accommodate the lower grain in either the fact or the dimension. Note that the dimension is not the only place where you keep non-additive attributes. This case is an example of instances where a non-additive attribute (a date-time stamp) could be carried on the fact.
The new file will contain data which is of a lower grain than the previous one (minutes versus hours). For this reason you should create a new fact which is at the lower grain - not a new dimension. You can still use the previous "hour" fact as a summary fact. If you aggregate the new fact to vehicles per hour and update the old fact each time you get new data, you won't have to change applications that work off the old fact. You should decide whether you would like to lower the grain by adding an attribute to the dimension, or whether you would like to keep a date-time attribute on the fact to keep the cardinality of the dimension low.
I would venture to say that you will probably have to keep at least one date-time stamp, or else two such attributes to cater for a start and end time on the fact. Once again it depends on the type of queries your users will run.
For this purpose I suggest that you read "http://www.rkimball.com/html/designtips/2001/designtip29.html" as well as "http://www.intelligententerprise.com/020613/510warehouse1_1.shtml". Kimball has written some other articles specifically on using date-time stamps in the fact. You could also visit "www.intelligententerprise.com" and search under Data Warehousing for more information. If you still don't feel confident with your solution you could e-mail Ralph Kimball (from www.rkimball.com).
    Regards

  • ORACLE olap - dimension with huge number of members.

    Hi Experts,
I am modeling an Oracle OLAP cube with 10 dimensions. One of the dimensions has a huge number of leaf members (probably 1 billion). How many members can one dimension have? Is there any limit on the number of dimension members?
    Thanks and Regards
    Siva

    Hi there,
I'm not sure of the theoretical limits, but I would never recommend trying to build a dimension with 1 billion members.
In situations where such granular data exists, it is common to leave it in a table and load data into the OLAP cube at a more aggregate level.
    Which business entity are you trying to model in a 1 billion member dimension?
    Thanks,
    Stuart Bunby
    OLAP Blog: http://oracleOLAP.blogspot.com
    OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
    OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
    DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html

  • How to use the olap dimension

    hello everybody,
I have built a project using OWB, but I don't know how to deploy these objects to OBIEE. When building in OBIEE I can only select tables or views, and I would have to set up again the dimensions that were already set up in OWB. Can I get the dimension information into OBIEE without building it again there?
Could you give me some good advice?
Thanks

    Hi,
I guess you should deploy your OWB objects as you normally would. These OWB objects can then be imported into OBIEE, so you do not have to build them again.
    Good Luck,
    Daan Bakboord

Maybe you are looking for

  • Directory Highlighted in Red

    I'm new to OSX. What does it mean when a directory in Finder is highlighted in red? Cheers, Kris

  • Adobe shockwave player crashing or hanging continually

Adobe Shockwave Player is always crashing when viewing random things. I have tried to update it, but I have the most current copy. I'm running Firefox 25 and these are the most recent crash reports. 903013f1-4227-4259-96f2-fedaf9fde83a 11/4/2013

  • Really weird sound problem

    I have a 1st gen 32 GB iPod touch running iPhone os 3.1.3. and I am having a very strange problem where I cannot hear any vocal track on movies or music. I can hear any sound effects or music just fine, but talking or singing cannot be heard. On some

  • Stuck on mac os x utilities

My MacBook Pro says it has no startup disk and now I'm stuck on Mac OS X Utilities with four options: (Time Machine), (Install Mountain Lion), (Use Internet for Help) or (Disk Utility).

  • Servlet calling another servlet

    hi, I am writing a web-application which requires one servlet (on main server) to call another servlet (on a remote server). The main servlet needs to call the remote one and send some parameters to it. The remote servlet would be sending back XML da