KSCOPE

Gentlemen,
Hope you guys don't mind this approach, but I thought this forum would be the best place to get answers about ODTUG Kscope. :)
After a long battle, my manager has agreed to let me attend ODTUG Kscope11, but I would like to ask what I can attend and roughly what it will cost, so I can sort it out with my manager and make the early-bird payment. Could anyone help me with this?
I see a series of close to 100 presentations on Hyperion, plus some trainings on Hyperion products scheduled at the same time by different presenters. Please suggest which ones I need, or whether I should try them all.
(By background, I am a Hyperion developer.)
Cheers

Congrats on being able to attend Kscope11; it is a wise choice from your manager. As others have said, it all depends on what you want to get out of it. Are you looking to learn a new product, gain tips and tricks about what you currently do, or get hands-on experience?
The selection of presentations is so vast, it is hard to narrow it down. First, ask yourself: what is your area of expertise, and what do you want to get out of the conference? I would first look at the hands-on labs; these are half-day sessions where you work through specific products to get a good intro to them. They are filling up fast and are already at 60% capacity. Signups are on a first-come, first-served basis, so the earlier you sign up, the better the selection.
As for sessions, there is an end-user track that is more case-study oriented. As a developer, I would stay away from those, but there are tons of presentations on HFM, Essbase, Planning, etc. that are developer oriented. I would HIGHLY advise getting there for the Sunday symposium, where the Oracle product managers go through what is coming up, and for the ask-the-developer panel (I think on Monday). In your little free time, visit with Oracle at the kiosks to look at products you are not familiar with, and visit the Oracle support room to get questions answered (both new this year). If you don't want to sleep, you can join the Midnight Madness on Monday (really 10:00) for some bonding with your Hyperion brethren, and if you really don't want to sleep, join the late-night Werewolf games.
Plan to leave the conference with your head spinning with new ideas and your body tired but energized with all that you have done.
If you give more specific details on your background and what you want to get out of the conference, I'm sure Cameron, I, or one of the other contributors to the board can help steer you toward sessions we think are geared toward your goals.
I look forward to meeting you at the conference, please make sure you come up and introduce yourself.
Glenn

Similar Messages

  • Multiple production instances - KScope follow up

    Had this posted on Network 54 and it was recommended that I post here as well.
    During one of the lunch and learn sessions at KScope I texted in a question about multiple instances in production. It seemed there was some confusion to the question so I thought expanding the question with additional details here may provoke some additional conversation on the topic.
We are currently running a single Essbase instance with active-passive clustering in production on 11.1.2.1 (considering 11.1.2.3 for next year). For example purposes, let's say we have two business areas sharing the instance, each with 50 applications, for a total of 100 applications. Things run very smoothly most of the time, but from time to time during peak usage the single-threaded nature of the security file can slow things down. My thought was to run two instances of Essbase, one for each business area: one active on server A and one active on server B. They would each have their own failover to the other server.
Current setup:
Server A: Instance 1 active (100 applications)
Server B: Instance 1 passive
Proposed setup:
Server A: Instance 1 active (50 applications); Instance 2 passive
Server B: Instance 1 passive; Instance 2 active (50 applications)
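The proposed layout can be sketched as a toy failover model (server/instance names and the dict-based model are purely illustrative, not any Essbase API):

```python
# Toy sketch of the proposed active/active layout: each instance is active on
# one server and passive on the other, and a failed server's active instances
# move to the survivor. Names and states are illustrative only.

topology = {
    "Server A": {"Instance 1": "active", "Instance 2": "passive"},
    "Server B": {"Instance 1": "passive", "Instance 2": "active"},
}

def failover(layout, failed_server):
    """Return the surviving server's layout after failed_server goes down."""
    survivor = next(s for s in layout if s != failed_server)
    result = dict(layout[survivor])
    for instance, state in layout[failed_server].items():
        if state == "active":
            # the failed server's active instance becomes active on the survivor
            result[instance] = "active"
    return {survivor: result}

print(failover(topology, "Server B"))
```

Either way a single server fails, both instances end up active on the survivor, which is the point of the question: is running two active instances on one box acceptable in production?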
I have seen that you can set up multiple instances on a server, but have also seen that it is not recommended for production. Are they considering this scenario when they do not recommend it, or are there other reasons? I am fine if both instances fail over to the same server, as we run all applications on one server now and have plenty of memory. Also, we would have a separate file system and a separate port for the second instance (our end users do not know the port based on their login today, so that will not be an issue). Are there any concerns with this approach? Has anyone else tried it? We have successfully done this in our development region, but that does not have the usage that production has. It also seems a waste to have all the processors and memory sitting on server B not doing anything; we are paying for it, so why not use it?
    Some additional information that was requested.
This is in addition to DR; having failover is required based on internal audit/IT standards.
We have a NAS device and understand that it is still a single point of failure and likely a bottleneck, which is why we archive any modified databases each night, along with packaging up rules and calc scripts.
The servers are UNIX with 48 cores and 256+ GB of RAM.
    About half or more of the databases require write back.
    Thanks Andy

We have implemented similar solutions to take advantage of a server that would otherwise be redundant the majority of the time, and it does work well. I have not personally done this type of configuration on UNIX using OPMN, but I have on Windows using failover clusters. I feel OPMN is a bit poor in terms of management and lacks functionality, so it may be worth considering Oracle Clusterware, depending on the flavour of UNIX.
You could also look at virtualisation options as an alternative.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Essbase beginner's track at KScope

While it is not my intention to spam this board with Kscope information, I wanted to inform you all of what I think is exciting news. At Kscope 12 (June 24th-28th, 2012) we will have a track dedicated to Essbase beginners, with subjects like:
Cube design considerations
Dimension building and data load basics
Calculation basics
Reporting basics
Tool selection, and much more.
    Why am I mentioning all this now? For two reasons.
    1. So people can consider attending and make plans
2. MORE IMPORTANTLY, if you think there is a topic that should be covered, let us know - or, if you want to present it yourself, we encourage you to submit an abstract. The deadline for abstracts is Friday.
    If you are chosen to present, your conference fee is comped.
Submit your abstract at https://caat.odtug.com/odtug_caat/caat_abstracts_upd.main?conference_id=90, or go to www.kscope12.com to learn more.
If you do not want to submit for the beginner's track, use the same link to submit for any of the other tracks.

To add to Glenn's post: if presenting seems like a daunting task, here are a few guidelines to make the process much easier and your abstract stand out:
1. Be Clear and Concise: Clearly explain in the abstract what the attendee will learn by attending your session. What questions will be answered? Spell it out. Don't mention the presenter's name in the abstract or your company's name. Make your abstract interesting; after all, if your session is accepted, your abstract is what will bring people in to see you speak.
2. Pick and Choose: Put a fresh spin on an old problem, or offer cutting-edge advice? Do both. Submit multiple abstracts for your greatest chance of acceptance. You are limited to four abstracts, so make your selections carefully. Kscope is where people go to see sessions they can't see anywhere else, so tailor your presentations to this audience.
3. Toot Your Own Horn: In the biography portion, tell us what makes you a technological expert and an expert speaker. This plays a large part in the selection process.
4. And of course, Double-check Your Work: Spell check, make sure there are no factual errors, and make sure your abstract is high on educational detail while avoiding all marketing speak.
    Regards,
    Robb Salzmann

  • KScope PKGBUILD uploaded to AUR

    Hi,
    I wanted a source navigation tool to browse through the source code.
    I really found kscope very easy to use.
As the pkg was not in AUR, I made a PKGBUILD and built the package for myself.
    Just thought why not give back something to the wonderful arch community.
    I have uploaded the PKGBUILD to AUR
    http://aur.archlinux.org/packages.php?d … =1&ID=2949
Please do check this package and post your comments.
    Regards,
    Abhay


  • Which is faster -  Member formula or Calculation script?

    Hi,
    I have a very basic question, though I am not sure if there is a definite right or wrong answer.
To keep the calculation scripts to a minimum, I have put all the calculations in member formulas.
Which is faster: member formulas or calculation scripts? Because, if I am not mistaken, FIX cannot be used in member formulas, so I need to resort to IF, which is not index driven!
In the calculation script, while aggregating members that have member formulas, I have tried to FIX on as many members as I can.
    What is the best way to optimize member formulas?
    I am using Hyperion Planning and Essbase 11.1.2.1.
    Thanks.

Re the mostly "free" comment: if the block is in memory (qualification #1), and the formula is within the block (qualification #2), then the expensive bit was reading the block off of the disk and expanding it into memory. Once that is done, I typically think of the dynamic calcs as free, as the amount of data being moved about is very, very, very small. That goes out the window if the formula pulls lots of blocks to value and they get cycled in and out of the cache. Then they are not free and are potentially slower. And yes, I have personally shot myself in the foot with this: I wrote a calc that did @PRIORs against a bunch of years. It was a dream when I pulled 10 cells. And then I found out that the client had reports that pulled 5,000. Performance went right down the drain at that point. That one was 100% my fault for not forcing the client to show me what they were reporting.
I think your reference to stored formulas being 10-15% faster than calc script formulas applies when the formulas are executed from within the default calc. When the default calc is used, it precompiles the formulas and handles many two-pass calculations in a single pass. Perhaps that is what you are thinking of.
^^^I guess that must be it. I think I remember you talking about this technique at one of your Kscope sessions and realizing that I had never tried that approach. Isn't there something funky about not being able to turn off the default calc if a user has calc access? I sort of think so. I typically assign a ; to the default calc so it can't do anything.
    Regards,
    Cameron Lackpour

  • Data mining Algorithms in Essbase

    Hi,
    Just wondering if anyone has used data mining algorithms provided within Essbase. Any thoughts or pointers towards more information will be helpful..
    Thanks in Advance !!

In a 2009 presentation at Kscope from ODTUG titled "Little Used Features of Essbase," I went through how to use data mining. It is available on the ODTUG website. I do know that nothing has been done with the data mining modules in a long time, as the team was disbanded since Oracle has other tools to do data mining.

  • EAS does not show Business Rules node

    I'm in the process of validating an 11.1.2 Planning installation. This is the development environment, so one Essbase server, one everything else server.
One of the issues I've run across is that my client install of EAS does not have a Business Rules node (I am staying away from Calculation Manager because of multiple horror stories I have heard about it from multiple sources).
    However, it does show up in the launched-from-the-web version.
    The web launched release numbers are:
    EAS 11.1.2.0.00.462
    APS 11.1.2.0.0.615
    HBR 11.1.2.0.0.722
    And now this gets really weird. The client launched release numbers are:
    EAS 11.1.1.2.0.00.462
APS 11.1.2.0.0.615
    HBR *4.1.1*
    Somehow my clean install has magically gotten the last release of Planning 4's Business Rules?
    It sure sounds like something didn't get installed correctly. I was able to find this thread regarding the same problem:
    How to add HBR plugin in EAS console
    In that thread, RahulS wrote:
To sort this out give a try to copying the common/log4j folder from the server to the client, or install the Integration Services Client on the machine where the Administration Services Console is installed.
Unfortunately, there are multiple log4j folders and none (that I can see) that have a parent of common. Can anyone give me some guidance as to whether this is a good solution and, if so, a better hint as to which log4j folder I should grab?
    Thanks,
    Cameron Lackpour
    P.S. I found the install log for EAS and found two errors at the end of the install process, but I believe these are related to writing an icon to the desktop. This doesn't surprise me as this Vista laptop is locked down tight.
    Apr 13, 2011 11:00:02 PM), Install, com.installshield.product.actions.DesktopIcon, err, ServiceException: (error code = -80002; message = "Access is denied.
    (2147942405)"; severity = 0)
    STACK_TRACE: 22
    ServiceException: (error code = -80002; message = "Access is denied.
    (2147942405)"; severity = 0)
         at com.installshield.wizard.platform.win32.Win32DesktopServiceImpl.createDesktopItem(Native Method)
         at com.installshield.wizard.platform.win32.Win32DesktopServiceImpl.createDesktopItem(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at com.installshield.wizard.service.LocalImplementorProxy.invoke(Unknown Source)
         at com.installshield.wizard.service.AbstractService.invokeImpl(Unknown Source)
         at com.installshield.product.service.desktop.GenericDesktopService.createDesktopItem(Unknown Source)
         at com.installshield.product.actions.DesktopIcon.install(Unknown Source)
         at com.installshield.product.service.product.PureJavaProductServiceImpl.installProductAction(PureJavaProductServiceImpl.java:2969)
         at com.installshield.product.service.product.PureJavaProductServiceImpl$InstallProduct.getResultForProductAction(PureJavaProductServiceImpl.java:8048)
         at com.installshield.product.service.product.InstallableObjectVisitor.visitComponent(Unknown Source)
         at com.installshield.product.service.product.InstallableObjectVisitor.visitInstallableComponents(Unknown Source)
         at com.installshield.product.service.product.InstallableObjectVisitor.visitProductBeans(Unknown Source)
         at com.installshield.product.service.product.PureJavaProductServiceImpl$InstallProduct.install(PureJavaProductServiceImpl.java:7199)
         at com.installshield.product.service.product.PureJavaProductServiceImpl$Installer.execute(PureJavaProductServiceImpl.java:5240)
         at com.installshield.wizard.service.AsynchronousOperation.run(Unknown Source)
         at java.lang.Thread.run(Thread.java:619)
    (Apr 13, 2011 11:00:02 PM), Install, com.installshield.product.actions.DesktopIcon, err, ServiceException: (error code = -80002; message = "Access is denied.
    (2147942405)"; severity = 0)
    STACK_TRACE: 22
    ServiceException: (error code = -80002; message = "Access is denied.
    (2147942405)"; severity = 0)
         at com.installshield.wizard.platform.win32.Win32DesktopServiceImpl.createDesktopItem(Native Method)
         at com.installshield.wizard.platform.win32.Win32DesktopServiceImpl.createDesktopItem(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at com.installshield.wizard.service.LocalImplementorProxy.invoke(Unknown Source)
         at com.installshield.wizard.service.AbstractService.invokeImpl(Unknown Source)
         at com.installshield.product.service.desktop.GenericDesktopService.createDesktopItem(Unknown Source)
         at com.installshield.product.actions.DesktopIcon.install(Unknown Source)
         at com.installshield.product.service.product.PureJavaProductServiceImpl.installProductAction(PureJavaProductServiceImpl.java:2969)
         at com.installshield.product.service.product.PureJavaProductServiceImpl$InstallProduct.getResultForProductAction(PureJavaProductServiceImpl.java:8048)
         at com.installshield.product.service.product.InstallableObjectVisitor.visitComponent(Unknown Source)
         at com.installshield.product.service.product.InstallableObjectVisitor.visitInstallableComponents(Unknown Source)
         at com.installshield.product.service.product.InstallableObjectVisitor.visitProductBeans(Unknown Source)
         at com.installshield.product.service.product.PureJavaProductServiceImpl$InstallProduct.install(PureJavaProductServiceImpl.java:7199)
         at com.installshield.product.service.product.PureJavaProductServiceImpl$Installer.execute(PureJavaProductServiceImpl.java:5240)
         at com.installshield.wizard.service.AsynchronousOperation.run(Unknown Source)
     at java.lang.Thread.run(Thread.java:619)
P.P.S. In the easconsole-move-hbrcfile-frtemp-stderr.log file, I see the following error:
The system cannot find the file specified.
This sort of suggests that indeed something died on install.
    Edited by: CL on Apr 14, 2011 2:36 PM
    Edited to clean up RahulS' quote.
    Edited by: CL on Apr 14, 2011 3:50 PM
    Error file contents added

There you go I was on the right track, who needs oracle support :)
^^^Isn't this board a sort of unofficial Oracle support? I have to admit that after my first fruitless search of the KB on support.oracle.com, I purposely came here first before I had the client log an SR.
That's wrong, of course, or at least a little illogical, given that the customer pays for support and, after all, answering these kinds of questions is sort of their primary job. Putting on my ODTUG hat: Kscope 2011 is going to have a support symposium focused on process (like why is Cameron so brain-dead that he can't type 'EAS and node' into the KB search field, which pops up the obvious answer?) so that people don't waste time like I just did. It's going to have some pretty big names, and I hope you all can come.
    I'm going to forward this thread to my contact at Oracle support as evidence that even "intelligent" people could use some help with how to do searches, no matter how elementary that may be.
    Regards,
    Cameron Lackpour
    P.S. If I could make the background of this post have a color, it would be blush pink, to match the tenor of my cheeks. I can be a total idiot some times. Thanks John and Rahul.

  • Which is the maximum allowable size of a BSO cube?

    Hi Experts!
Can anyone tell me if it is possible to have a BSO cube of between 1 and 4 terabytes, and if it can be tuned for acceptable performance with about 100 concurrent users?
Sorry if I look stupid with this question, but this is my really important question right now! Thank you so much in advance!

The largest BSO cube I have seen is 400 GB. I was just at the Kscope conference speaking with one of the long-time Oracle Essbase support reps; I asked her the largest BSO cube she had seen, and it was 500 GB. I would not advise going beyond 1 TB even on modern hardware.
Anyone else have opinions on the largest supportable BSO cube?
One thing to understand with huge cube sizes is that when/if you need to do a dense restructure, it will take a long, long time. That leads you into a process of exporting level 0 data, clearing the cube, updating the hierarchy, loading level 0, and recalcing the cube, which again on a large cube can take a lot of time.
Data of this size is many times better placed in an ASO cube, presuming it can meet the business requirements.
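The export/clear/reload cycle described above can be sketched as an ordered set of MaxL statements. This is a rough, untested sketch: the app/db names, file name, and rules file are placeholders, and the exact statement spellings should be checked against the MaxL reference before use.

```python
# Rough sketch (names illustrative): build the MaxL statements for the
# export / clear / update / reload / recalc cycle for a large BSO cube.
# Verify statement syntax against the Essbase MaxL reference before use.

def rebuild_steps(app="Sample", db="Basic",
                  export_file="lev0.txt", load_rule="LoadL0"):
    """Return the ordered MaxL statements for an export/clear/reload cycle."""
    cube = f"{app}.{db}"
    return [
        # 1. export only level-0 data; upper levels get recalculated later
        f"export database {cube} level0 data to data_file '{export_file}';",
        # 2. clear all data so the outline change cannot force a dense restructure
        f"alter database {cube} reset data;",
        # 3. (update the hierarchy here, e.g. via a dimension build)
        # 4. reload the level-0 export
        f"import database {cube} data from data_file '{export_file}' "
        f"using server rules_file '{load_rule}' on error abort;",
        # 5. re-aggregate the cube
        f"execute calculation default on {cube};",
    ]

for statement in rebuild_steps():
    print(statement)
```

The point of generating the script rather than restructuring in place is that a dense restructure rewrites every block, while this cycle only touches level-0 data plus one aggregation pass.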
    Regards,
    John A. Booth
    http://www.metavero.com

  • Drill through report - Essbase Studio

    Hi,
    Need some help on drill through reports.
I have created a drill-through report in Essbase Studio. I am able to drill down to the source table in Smart View. But the problem is, when I drill, it shows me all the records in the table instead of only the records for the intersection I am drilling on.
    Regards,
    Ragav.

You have to trick Studio into thinking the cube was built from Studio. In my session on Advanced Studio Tips and Tricks at the Kscope conference at the end of June, I detail a couple of ways to do this. It is too detailed to put into a post here.

  • Essbase smartview retrival versus excel add in

    Hi all,
I have some performance issues with Smart View. I am running a simple query through Smart View with one attribute dimension on the Excel sheet; it runs for a couple of minutes and throws a timeout error.
I tried the same query using the Excel add-in for Essbase, and it comes back in a second. I noticed a huge difference between the two.
Are there any configuration settings specific to Essbase or Provider Services, other than net retry count or net delay, to improve the performance?
    Thanks

In our experience, we don't see a significant difference between embedded mode (a/k/a direct connections) and APS mode connections. In Dodeca, we are seeing retrieval speeds exceeding the classic add-in. Further, one of my friends whom I saw at Kscope, who has written some custom Essbase Grid API functionality for his company, confirmed that he is seeing faster-than-classic-add-in speeds as well.
    Note: Smart View retrieves are not as fast as the classic add-in, particularly when the retrieves are large. The Smart View team is aware of the issue which is caused, in part, by the XML format and the associated processing.
    Tim
    Tim Tow
    Applied OLAP, Inc

  • SmartView 11.1.2 not remembering the last selected Planning page member

    Am I missing something here?
    In Planning web forms, if I have a dimension called "Cost Center" in the Page section of the POV of Form A, and I select Cost Center 12345, and then navigate from form A to form B, and form B has Cost Center in the Page section of the POV, Cost Center 12345 is still selected.
    This is standard Planning behavior and is selected in the Preference "Remember selected page members".
    However, when I do this in SmartView, all of my POV selections are reset to the last level zero member, every time, for every dimension. I am using IDescendants(dimname) to populate my Page selectors, and then using security to limit the members.
    I am logged in as the application owner.
    The version of Smartview is 11.1.2.0.0.1 (Build 003).
    The connection to Planning is shared.
    I have tried this on ad-hoc and "normal" forms; the behavior is the same.
    Someone please tell me I'm overlooking some incredibly obvious setting. If you come to KScope, I will gladly buy you the beverage of your choice.
    Regards,
    Cameron Lackpour

    Hi Cameron,
    I can confirm seeing this as well. It reminds me of the (IMHO) quirky behavior of the Copy POV functionality. It seems like a functionality disconnect somehow.
This sounds like a good subject to discuss at the Kscope Symposium! :)
    http://essbaselabs.blogspot.com/2011/04/smart-view-11121-some-cool-new-features.html?showComment=1303223330470#c6047535796462561596
    Regards,
    Robb Salzmann

  • Variable ID is undefined. Setting ?id to none

    Hi,
I need to set my variables so that when I go to the index.cfm page, the index page displays. Currently I get the error "Variable ID is undefined" because I have not defined the variable in the URL. I want an include file to be displayed in the index.cfm page when index.cfm?id=saex is called.
Does anyone know how to have these variables in CF and still be able to display just the index.cfm file? Hope I explained myself well.
    Thanks for your help.
    Luc
<cfif id IS 'home'>
    <cfset url = "test.html">
</cfif>
<cfif id IS 'saex'>
    <cfset url = "sa-explorer.html">
</cfif>
<cfif id IS 'ckscope'>
    <cfset url = "cape-kscope.html">
</cfif>

    Hi,
You might like to grab a few books on ColdFusion and have a read of them, or have a look through the ColdFusion LiveDocs, to get a better understanding of how to build ColdFusion apps.
Here's the general idea of how you might set things up: give url.id a default value with <cfparam> so the page still works when no id is passed on the URL, reference the variable as url.id, avoid naming your own variable "url" (which shadows the built-in URL scope), and then <cfinclude> the template you mapped the id to.

  • ASO update of attribute associations causes Data to be "converted"?

Has anyone seen the issue in 11.1.2.3(.506) where associating attributes with a dimension causes Essbase to reorganize the data (referred to as "converting" it)?
    e.g.
    import database app.db dimensions connect as user identified by password using server rules_file 'DimFuel' on error write to '/essapp/subject/sos/logs/sosp04_dims.build_dim_sosp04_sosp04_dimopen.20150429011053.err':
      OK/INFO - 1270086 - Restructuring converted [5.72103e+08] input cells and removed [0] input cells.
      OK/INFO - 1270087 - Restructuring converted [32] and removed [0] aggregate views.
      OK/INFO - 1007067 - Total Restructure Elapsed Time : [1944.88] seconds.
Previously this process ran in 3 seconds in 11.1.2.2(.100).
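For before/after comparisons like this, the relevant figures can be pulled out of the application log programmatically. A minimal sketch; the message numbers (1270086, 1007067) are taken from the lines quoted above, everything else is illustrative:

```python
import re

# Minimal sketch: extract the converted-cell count and elapsed seconds from
# Essbase application-log restructure messages like the ones quoted above.

LOG_LINES = [
    "OK/INFO - 1270086 - Restructuring converted [5.72103e+08] input cells and removed [0] input cells.",
    "OK/INFO - 1270087 - Restructuring converted [32] and removed [0] aggregate views.",
    "OK/INFO - 1007067 - Total Restructure Elapsed Time : [1944.88] seconds.",
]

def restructure_stats(lines):
    """Return (converted_input_cells, elapsed_seconds) from a restructure log."""
    cells = elapsed = None
    for line in lines:
        m = re.search(r"1270086 - Restructuring converted \[([\d.e+]+)\] input cells", line)
        if m:
            cells = float(m.group(1))
        m = re.search(r"Total Restructure Elapsed Time : \[([\d.]+)\] seconds", line)
        if m:
            elapsed = float(m.group(1))
    return cells, elapsed

cells, secs = restructure_stats(LOG_LINES)
print(f"{cells:.0f} cells converted in {secs} s")
```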

I agree with everything that Tim has said. Let me elaborate with some more detail that might be helpful:
The fact that agg views are not based on query tracking makes no difference in the analysis. Query tracking only affects WHICH views are selected. Once views are selected, by whatever means, they are handled in the same way. Is there a reason you think otherwise?
Let's divide the question into two parts:
    1. What is a restructure, and why is a restructure needed?
    2. Why must the agg views be converted?
But first realize there are two possibilities concerning WHICH agg views actually exist:
  a. All views might be on the primary hierarchy only (you told Essbase not to consider alternate hierarchies when creating aggregates) - let's call that the Agg_Primary option
  b. Views might be based on both primary and alternate (which includes attribute) hierarchies - let's call that the Agg_Any option
All of this is discussed in my chapter "How ASO Works and How to Design for Performance" in the book "Developing Essbase Applications," edited by Cameron Lackpour. You will also find a discussion of the bitmap in the section of the DBAG entitled "An aggregate storage database outline cannot exceed 64 bits per dimension" (Essbase Administrator's Guide Release 11.1.2.1, Chapter 62, page 934), and in a presentation I made at Kscope 2012, which you can find on ODTUG.com.
1.  Why a restructure? As Tim says, the outline has changed, and anytime the number of levels per hierarchy or the width of the hierarchy changes, the coding system used for the data changes. What do I mean by this? The binary system by which each piece of data in your cube is described is called the "bitmap." In actuality, this bitmap only reflects the data's position in the primary hierarchy for each dimension. The primary hierarchy for a specific dimension is not necessarily the first hierarchy seen in your outline. It is the "widest" hierarchy: the one requiring the greatest number of bits to represent all Level 0 (L0) members found in that hierarchy and each L0's full ancestry within that hierarchy.
If you read the references above, you will see that the number of bits used to determine the "widest" hierarchy is a function of the number of levels and the size of the largest "family" in a hierarchy. A hierarchy of 5 levels where the largest family has 17 members will require 3 bits more than a hierarchy of 5 levels with 4 members in the largest family (17 requires 5 bits and 4 requires 2 bits). So you can see that any time you add more members, you could be causing the size of the largest family to exceed a power-of-2 boundary.
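The family-size arithmetic above can be sketched in a few lines. This is just the ceil(log2(n)) arithmetic consistent with the 17-to-5-bits and 4-to-2-bits examples in the text; whether Essbase reserves extra code points is not addressed here.

```python
import math

# Sketch of the family-size bit arithmetic: the bits needed to enumerate a
# "family" of n sibling members is ceil(log2(n)), matching the examples in
# the text (17 members -> 5 bits, 4 members -> 2 bits).

def family_bits(n_members: int) -> int:
    """Bits required to enumerate one family of n_members siblings."""
    return math.ceil(math.log2(n_members)) if n_members > 1 else 0

print(family_bits(17))                   # 5 bits
print(family_bits(4))                    # 2 bits
print(family_bits(17) - family_bits(4))  # the 3-bit difference cited above
print(family_bits(16))                   # 16 members still fit in 4 bits;
print(family_bits(17))                   # 17 crosses the power-of-2 boundary
```

The last two lines show why adding a single member can trigger a restructure: the bit count only grows when a family size crosses a power-of-2 boundary.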
Additionally, if the primary hierarchy is NOT all-inclusive - i.e., it does not contain all L0 members for that dimension - then you have to add a sufficient number of bits to enumerate the hierarchies.
So, in summary, changes to the width or the height (number of levels) will require a restructure, forcing the identifying bits on every piece of data to be updated.
In the case the OP mentions, where the ONLY changes are to add attribute associations, you normally would NOT expect to see a change in the bitmap due to the number of bits required. This is because attribute dimensions can never have new unique L0 members (you cannot associate to something that does not exist). If you go through the math (and realize that the bitmap for an attribute dimension does NOT have to consider the size of the L0 families of the primary hierarchy - or whichever level the association is based on), you will find that there is no possible attribute dimension that can require more bits than the primary hierarchy. UNLESS you have been sloppy and have a primary dimension with x L0 members and a very large secondary hierarchy with unique L0 members of its own (i.e., L0 members not appearing in the primary hierarchy).
    So the answer is to inspect the statistics tab before and after your addition of member associations.
Note (and this does NOT pertain in the OP's case): a similar situation exists if all secondary hierarchies contain ONLY members that already exist within the primary hierarchy. However, remember that the FIRST hierarchy in your outline is not necessarily the primary hierarchy. Most people, however, consider the first hierarchy the de facto primary. If one is in a hurry and adds a member to that first hierarchy but does not add it to ALL of the alternate hierarchies (one of which is the true primary), then bits will have to be added to each piece of data to enumerate the hierarchies - thus triggering a restructure.
Finally, I am relying on the OP's description of the conditions, where it was stated that ONLY associations were added and no upper-level attribute members were created. If upper-level attribute members are added, it is possible that the number of levels for the attribute dimension changes. In this case a mini-restructure would be required: one that would not change the bitmap on every data item, but rather a change to the mapping table that relates alternate hierarchies to the primary hierarchy. Note that the existence of this table and its exact structure are not acknowledged by Oracle; I (Dan Pressman) have postulated it as one possible implementation of the observed functionality.
2.  Aggregate View Conversion: Each data item is tagged not only with a bitmap indicating its position within each primary hierarchy, but also with a "View ID." This is the number seen in the ASO version of the .CSC file created whenever the view definition is saved. The input data is always identified by a View ID of 0. The View ID of other views is a function of the number of levels, and the bits therefore required, of ALL hierarchies of ALL dimensions. Therefore any restructure will require that the View IDs of all aggregate views be translated (or converted) from the old scheme to the new scheme. Note that this is purely a translation; no aggregation is required.
Please excuse me if I post this now and add some more later on this second question - actually, let me know if anyone has read this far and is interested.
Please note: in the above, wherever I have said "each data item," I am referring to the situation where no dimension has been tagged as the "compression" dimension. If a compression dimension has been used, then replace the phrase "each data item" with the phrase "each data bundle." I will leave it to the reader to find the section of the DBAG that describes compression and data bundles.

  • Dynamic Time Series in ASO

We have a time hierarchy like the one below in ASO; we are using Essbase 11.1.2.2.
Time
  2011
    First Half 2011
      Q1 2011
        Jan 2011
          Week 01/01/2011
            2011-01-01
            2011-01-02
          Week 01/03/2011
            2011-01-03
            2011-01-04
            2011-01-05
            2011-01-06
            2011-01-07
            2011-01-08
            2011-01-09
We have this hierarchy continuing for 5 years, the same as the above. How can I achieve QTD, MTD, WTD, and DTD in ASO? Kindly help me out.
    Regards,
    AS

    I'd also look at Dan Pressman's 2010 Kscope presentation. I think even Gary agrees that using hierarchies rather than 'straight' MDX is usually preferable (based on his remarks in 'Developing Essbase Applications', which is another excellent source for the details of both Crisci and Pressman solutions).

  • Essbase Studio Drill-Through Report

    Is it possible to run an Essbase Studio drill-through report in a SmartView spreadsheet that is connected to a cube that was not deployed by Essbase Studio? If so, how do you associate the drill-through report with the cube?

You have to trick Studio into thinking the cube was built from Studio. In my session on Advanced Studio Tips and Tricks at the Kscope conference at the end of June, I detail a couple of ways to do this. It is too detailed to put into a post here.
