Essbase cube refresh with minimal downtime

Database Application Development
Hi All,
I have a requirement. At present I have one cube in my Essbase application, and Essbase is the source for reporting.
I need to refresh the cube daily.
Please guide me on how I can refresh the cube without stopping users.
Thanks !
EB

The best way is to understand the peak usage hours of each application by its specific users and refresh when the applications are not being used or when usage is low. That way you can refresh multiple times a day, working around the usage hours and without having to kick out the users.
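
For example, the load window itself can be scripted so users are locked out only for the few minutes the data is actually moving. A minimal MaxL sketch, assuming a hypothetical application Sales with database Basic and placeholder file names (run via essmsh, adjusting paths and credentials for your environment):

/* block new connections and drop any lingering sessions for the load window */
alter application Sales disable connects;
alter system logout session on database Sales.Basic force;

/* load the refreshed data and aggregate */
import database Sales.Basic data from local text data_file 'daily.txt'
    using server rules_file 'DayLoad' on error write to 'daily.err';
execute calculation default on Sales.Basic;

/* reopen the cube to users */
alter application Sales enable connects;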
However, what John shared is the more robust approach, the only exception being when modifications are being made to the cubes themselves.
I think I will be waiting for the next blog post that expands on this topic!
Cheers,
A.S.

Similar Messages

  • Minimal down time

    We have an Exchange 2010 server and are looking to migrate to Exchange 2013. I have set up the new server and everything is working. I did move one mailbox, and it does send and receive email. My concern is downtime. We would like as little downtime as possible.
    Our exchange 2010 server has the SSL Certificate. Do I move the certificate to the new server, migrate all mailboxes and swap IP addresses on the servers?
    Or do I get a new certificate for the Exchange 2013 server, migrate mailboxes, set up another MX record and add entries with the registrar and firewall?
    Thank you for the assistance.

    Hi,
    Internally, we send and receive email using the Exchange self-signed certificate, so when we move a mailbox from Exchange 2010 to 2013, we can still send and receive successfully.
    Externally, we need another certificate for sending and receiving.
    If we don't want to change the external URL, we can proxy to Exchange 2010 via Exchange 2013:
    just point the external URL to Exchange 2013,
    export the certificate from Exchange 2010, import it into Exchange 2013,
    and then add the Exchange 2013 server's FQDN to the certificate.
    If we want to migrate to Exchange 2013 and then delete Exchange 2010 completely,
    I am afraid we need a new certificate.
    Feel free to contact me if there is any problem.
    Thanks
    Mavis
    Mavis Huang
    TechNet Community Support

  • Essbase Cube Linking With SQL

    Hi, experts!
    I have Essbase Studio from EPM version 11.1.2.1. The thing is that I have to link the Essbase cube to SQL. Can you suggest any solutions, or just give me some info about the drill-back method in Essbase Studio?

    The documentation on drill-through is a good place to start: http://docs.oracle.com/cd/E17236_01/epm.1112/est_user/ch17.html
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Shared Services access to Essbase cubes

    I'm having several problems with user access to Essbase (non-Planning) cubes. The first problem is that I have granted Read access to the users (using a group) to these Essbase cubes in Shared Services. The security has been refreshed, and if you view the properties of any of the databases to which I've granted access, it shows that the group has read access. However, the users do not see these applications in Smart View Data Source Manager. Does anyone know why they are not showing up in Smart View? Using my own id, I added all of these applications to the pre-defined view, and if the users use the pre-defined view, they can see the databases, but if they try to connect it says they don't have access.
    The second issue is that only the Essbase cubes that existed at the time that we upgraded to version 11 are showing up in Shared Services. We have since created several more cubes, but I am unable to get them to show up in Shared Services. What am I missing? Shared Services Help is not particularly helpful in either of these instances.
    Thanks,
    Sabrina
    P.S. Just venting, but I hate Shared Services.

    John,
    I have two problems with your suggestion. The first is that when I go to the "Assign Access Control" page, I can't select users/groups, but can only select users. I could do each user separately, I suppose, but doesn't that sort of defeat the purpose of having groups? I'm on version 11 - do you know of any reason why I can't choose groups at this point?
    My second problem is that I can't set them as essbase/planning users, if by that you mean there should be a choice that is both. I can do either one, but not both together. They are currently set to Planning. Will it mess anything up with their Planning access if I set them to Essbase?
    Thanks,
    Sabrina
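
    As a side note, if the Shared Services screens keep fighting you, Essbase database access can also be granted from MaxL once the users and groups exist in Shared Services - a minimal sketch, assuming a hypothetical group AnalystGrp and the cube Sample.Basic:

    /* grant read access on the cube to an existing externally managed group */
    grant read on database Sample.Basic to AnalystGrp;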

  • Change channel group mode without down time.

    Hi All,
    I'm trying to find a way to change the channel group mode on our access switch from mode on to mode active.
    The ports on the access switch and core switch are configured like this: channel-group X mode on
    The goal is to change this config to channel-group X mode active with no downtime on the switch.
    Please let me know if you have any ideas to do that.
    Fabien.

    I don't think it's possible to change a channel group mode without downtime.
    Changing from On to Active will cause the port to start sending LACP messages in order to form an LACP Port-Channel.
    As you are taking a link down, I suspect spanning-tree will also try and reconverge.
    The downtime should be fairly minimal though. Do you have a maintenance window?

  • Increased calc time in Essbase cube

    Hello-
    I have an Essbase cube that just recently started experiencing very long calc times. The cube has been in a 9.3.1 environment for approximately 4 months and there haven't been any major additions to the data. Just adding Actuals every month. I had a CalcAll running in approximately 2 minutes and now it's running in 2+ hours. This essentially happened overnight. The size of the page file is 267Mb and index file is 25Mb. The data cache and index cache are currently set at 400,000Kb and 120,000Kb respectively.
    This is a 7 dimensional cube with 2 dense dimensions... Accounts and Time. My block size is high due to the large number of level 0 Accounts... 215,384Kb.
    The number of index page writes and data block reads & writes is pretty high... 128,214 - 3,809,875. And the hit ratio on the data cache is .07.
    I've tried adjusting the data cache and index cache both up and down... but with no success.
    Any help would be appreciated.
    Thanks!

    Here are a couple of things to think about:
    How big is the databasename.ind file, and how often do you restructure the database? If the index file is large, it means the database is fragmented (you can also look at the database stats to help figure this out). Every time you run the calc all, you cause fragmentation.
    If you have aggregate missing values turned off, that also increases the calc time. Also, you will find that calculating a loaded cube takes longer than calculating a cube with just level zero data.
    If your database is not too big, you might try exporting level zero data, clearing the database, reloading the data and doing your calc all (of course I would try this on a copy of the production database, not the actual database itself). You might find it quicker than your straight calc.
    Since you do a straight calc all, you might also consider converting this to an ASO cube; that gets rid of the calc all entirely.
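
    For reference, the stats check and the export/clear/reload pass described above might look like this in MaxL - a minimal sketch, assuming a hypothetical cube Finance.Plan, placeholder file names, and that essmsh runs on the Essbase server:

    /* a low average clustering ratio in these stats suggests heavy fragmentation */
    query database Finance.Plan get dbstats data_block;

    /* export only level 0 data, clear, reload, then aggregate */
    export database Finance.Plan level0 data to data_file 'lev0.txt';
    alter database Finance.Plan reset data;
    import database Finance.Plan data from local text data_file 'lev0.txt'
        on error write to 'lev0.err';
    execute calculation default on Finance.Plan;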

  • View Display Error in OBIEE with Essbase Cubes

    Hi All,
    Currently we are generating reports from Essbase cubes.
    We have a hierarchy in OBIEE, and when we try to drill down on one hierarchy (Tech Executive) we get the error below.
    " Error
    View Display Error
    Odbc driver returned an error (SQLExecDirectW).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 96002] Essbase Error: Unknown Member [H1_Rel_TecExec_Ini_appl].[Unknown] used in query (HY000)
    SQL Issued: SELECT s_0, s_1, s_2, s_3, s_4, s_5 FROM ( SELECT 0 s_0, "Defect7M"."H1_Rel_TecExec_Ini_appl"."H1 Initiative Name" s_1, "Defect7M"."H1_Rel_TecExec_Ini_appl"."H1 Tech Executive" s_2, SORTKEY("Defect7M"."H1_Rel_TecExec_Ini_appl"."H1 Initiative Name") s_3, SORTKEY("Defect7M"."H1_Rel_TecExec_Ini_appl"."H1 Tech Executive") s_4, "Defect7M"."Defect7M#1"."Defect7M - measure" s_5 FROM "Defect7M" WHERE ("Defect7M"."H1_Rel_TecExec_Ini_appl"."H1 Tech Executive" = 'Unknown') ) djm "
    Can someone assist me in resolving this error?
    Thanks,
    SatyaB

    Satya,
    Have you done anything to modify the essbase drill logic within your BMM?
    Remember, when modeling Essbase you should just try to use the defaults first to ensure that all works correctly the first time through. Then you can adjust any hierarchies, federate, etc.

  • Workspace error when drilling down on Essbase Cube

    An Interactive Reporting Service error has occurred.-Failed to acquire requested service.
    (2001)
    We're trying to create OLAP queries in IR to deploy through workspace. When I drill down on any dimension, after deploying to workspace, I get the generic error above. Has anyone seen this error?

    I have created a report using Essbase cubes in OBIEE 11g. When I archive the report on one local server and unarchive it on another server, I face this error on the other server.
    Any replies would indeed be helpful.

  • Combining relation facts with dimensions from an Essbase cube

    Hi!
    I am having trouble combining relational measures (from EBS) with dimensions from an Essbase cube. The dimensions that we want to use for reporting (drilling etc) are in an Essbase cube and the facts are in EBS.
    I have managed to import both the EBS tables and the cube into OBIEE (11.1.15) and I have created a business model on the cube. For the cube I converted the accounts dimension to a value based dimension, other than that it was basically just drag and drop.
    In this business model I created a new logical table with an LTS consisting of three tables from the relational database.
    The relational data has an account key that conforms to the member key of the accounts dimension in the Essbase cube. So in the accounts dimension (in the BMM layer) I mapped the relational column to correct column (that is already mapped to the cube) - this column now has two sources; the relational table and the cube. This account key is also available in the LTS of my fact table.
    The content levels for the LTS in the fact table have all been set to detail level for the accounts dimension.
    So far I am able to report on the data from the fact table (only relational data) and I can combine this report with the account key from the accounts dimension (because this column is mapped to the relational source as well as the cube). But if I expand the report with a column (from the accounts dimension) that is mapped only to the cube (the alias column that contains the description of the accounts key), I get an error (NQSError 14025 - see below).
    Seeing as how I have modeled that the facts are connected to the dimension through the common accounts key, I cannot understand why OBIEE doesn't seem to understand which other columns - from the same dimension - to fetch.
    If this had been in a relational database I could have done this very easily with SQL; something along the lines of select * from relational_fact, dim_accounts where relational_fact.account_key=dim_accounts.account_key.
    Error message:
    [nQSError: 14025] No fact table exists at the requested level of detail
    Regards
    Mogens
    Edited by: user13050224 on Jun 19, 2012 6:40 AM

    Avneet gave you the beginnings of one way, but left out a couple of things. First, you would want to export level zero data only. Second, the export needs to be in column format. Third, you need to make sure the load rule you use is set to be additive, otherwise the last row will overwrite the previous values (see the sketch below).
    A couple of other ways I can think of to do this:
    Create a replicated partition that maps the 3 unused dimensions to null (pick the member at the top of the dimension in your mapping area)
    Create a report script to extract the data, putting the three dimensions in the page so they don't show up.
    Use the custom-defined function JExport in a calc script to get what you want.
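
    For the export option, the column-format level-zero extract might look like this in MaxL - a minimal sketch, assuming a hypothetical cube Fin.Acct and placeholder file names; note the additive behavior lives in the rules file itself (set the data load to add to existing values), not in the MaxL statement:

    /* export level 0 data in column format so a load rule can map it */
    export database Fin.Acct level0 data in columns to data_file 'lev0col.txt';

    /* reload through a rules file that adds to existing values */
    import database Fin.Acct data from local text data_file 'lev0col.txt'
        using server rules_file 'AddLoad' on error write to 'lev0col.err';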

  • Problem with drill down in time dimension - OBIEE 11G

    Hello There,
    I have a problem with drill down in the time dimension. The hierarchy for the time dimension is "Fiscal Year ---> Fiscal Quarter ---> Month (Name) ---> Date". When I select the time dimension and click Results, it opens in a pivot table view. The problem is, when I click "Total" it drills down to Year ---> Quarter, but when I click the "+" sign next to a quarter, it should drill down to the months for that particular quarter of that particular year; instead it drills down to the months for that quarter across all years.
    Any suggestions are much appreciated.
    Thanks,
    Harry.

    1.) Congrats for resurrecting a year-old thread.
    2.) Your answer is here: "Check the level key of the quarter level...it should include both quarter and year columns. Since a specific quarter occurs every year, quarter column alone can't be used as the level key."

  • Issue in integrating Essbase cubes with OBIEE

    Hi
    I am trying to use Essbase cubes as a data source in OBIEE for generating reports, but the issue is that there are no columns in the fact table of the cube in the BMM layer.
    Outline of cube is
    Revel(cube)
    (Hierachies)
    Time Time <5> (Label Only)
    Item <54> (Label Only) (Two Pass)
    DepInst <20> (Label Only)
    SFA_Flag <2>
    Deduction_Flag <2>
    Rating_Category <6>
    PD_Band <9>
    Product <17>
    Entity <4>
    CR_Agency <5>
    I am confused about how to generate reports without measures in the fact table.
    Regards
    Sandeep

    Hi Sandeep,
    in that case it's as I thought.
    Or did you just not specify any measure hierarchy? You tried this:
    "In BMM layer i made this dimension as fact and tried to create reports" - which isn't the way. First of all, your cube seems to be built quite bizarrely, since it doesn't even provide a default measure hierarchy, so I'd have your Essbase guys check that.
    As for the OBIEE side: the key is the physical layer; the BMM is already too late. In the physical cube object, you must define one of the hierarchies as the measure hierarchy (since your cube doesn't seem to provide one; see above):
    [http://hekatonkheires.blogspot.com/2010/02/obieeessbase-how-to-handle-missing.html]
    Cheers,
    C.

  • Integrating Essbase cubes with Oracle Tables in BI Server

    I'm trying to link together data from an aggregated Essbase cube with some static table data from our Oracle system. Both the Essbase and Oracle data exist correctly in their own right in the physical, business and presentation layers. The aggregated data is client sales; the static data is client details.
    Within the OBIEE Administration tool I've tried to drag the physical Oracle table for clients onto the clients Essbase section in the business model, and it seems to work OK, until you try to report on them together, and then I get the following error:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 42043] An external aggregate is found in an outer query block. (HY000)
    Can anyone advise on what I'm doing wrong?
    Thanks

    Thanks Christian, I found some very useful articles (one or two by you) - I'll have to look harder on the net before posting.
    One thing I found out, with respect to vertical federation, that others may benefit from, is that I found it much easier to start from the most detailed level and then attach the less detailed source, rather than starting with the less detailed source and adding the more detailed one on.

  • Exporting set of measures with aliases into essbase cube

    We were wondering if there is an easy way to import a bunch of measures with aliases into an Essbase cube, rather than entering them one by one?
    thank you

    The simplest way would be to create a spreadsheet with the measures and aliases listed in column format.
    You can organize them in Generation, Level or Parent-Child format.
    Create a dimension load rule that maps the columns to properties in Essbase.
    When you want to modify the Measures dimension layout, change the spreadsheet and rerun the load rule.
    Brian Chow
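
    For reference, running such a dimension build from MaxL might look like this - a minimal sketch, assuming a hypothetical cube Sample.Basic, a tab-delimited parent-child file measures.txt (parent, child, alias), and a rules file MeasBld that maps those columns to the Measures dimension:

    /* build the Measures dimension from the file through the rules file */
    import database Sample.Basic dimensions
        from local text data_file 'measures.txt'
        using server rules_file 'MeasBld'
        on error write to 'measbld.err';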

  • OBIEE 11.1.1.6 with Essbase Cube

    Hi,
    Is anyone using an Essbase cube as a data source for OBIEE 11.1.1.6? Which port should be open on the Essbase server for the OBIEE Admin tool to import metadata?
    When we use Publisher for reporting, it is very slow with the Essbase cube. Does anyone have issues here also?
    Thank you.
    BCHK

    Hi,
    By default the port is 1423 for Essbase and 9704 for OBIEE, so try 1423.
    To find the port numbers, just log in to your WebLogic console (FMW) at
    http://IP:7001/console ---> then go to Home --> Servers; the Admin Server and bi_server are listed out there with their port numbers.
    P.S. We just open the firewall for the ports below, FYI:
    TCP (9703, 9704, 9705, 9710, 9810, 80, 443, 1433, 1423, 8080, 389, 7001, 7002, 7003)
    Thanks
    Deva
    Edited by: Devarasu on May 29, 2012 5:19 PM
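
    As a quick connectivity check, an essmsh login from the OBIEE host can confirm the agent port is reachable - a minimal sketch with a hypothetical host and credentials; note that besides the agent port (1423 by default), the Essbase application processes use a separate port range (SERVERPORTBEGIN/SERVERPORTEND in essbase.cfg, commonly 32768-33768), which also needs to pass the firewall:

    /* if this login succeeds from the OBIEE box, the agent port is open */
    login admin 'password' on essbasehost;
    logout;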

  • DOWN TIME WITH OPS

    Product: ORACLE SERVER
    Date written: 1995-04-13
    Subject : DOWN TIME With OPS
    I can only answer in a generic format. First, to understand how long failover
    will take, you must first understand the different operations that
    take place on failover. My short answer would be: no, a 5 second average
    downtime is not feasible. My long answer is: it depends.
    1) How long does it take for the other node(s) to realize that a node in
    the cluster is down?
    2) How long it takes to "remaster" the locks?
    3) How long does it take for Oracle to perform instance recovery?
    All of these play a part in how long failover will take. My testing on failover
    has been focused on Pyramids, but I believe it should be the same.
    #1 is a tunable parameter - how often do you want each node in the cluster
    to send a heartbeat across to the other nodes? On our machines, it's
    set to 20 seconds before cluster rebuild and remastering of locks
    occur if a node goes down.
    #2 only takes minutes at most. I've been testing remastering of locks
    anywhere between 1,000 -> 200,000 locks, and the worst case scenario
    (remastering all 200,000 locks) has taken less than 4 minutes.
    Setting gc_db_locks = 1000 only took seconds to remaster the locks.
    #3 is probably the most time consuming. It's really dependent upon how much
    redo has been generated, and I don't have any timing statistics on this.
    A few things we need to realize: not all the locks may need to be
    remastered - only the locks which were found "dubious" - so the timing
    statistics I gave are worst case scenarios for remastering locks.
    Also, there's a possibility of absolutely no wait time if the data block
    you're trying to access is mastered by a node which is up and running.

