Consolidation Time

All,
I need some assistance. Our HFM application has grown over the years, but not drastically. We have added EST, BUD and ACT data, but our consolidation is based on our LEGAL hierarchy and we have many parent/child relationships. We also have shared children, and when something is posted to a child at a base level, the hierarchy is set up to roll that information through the whole tree structure to the top of the house.
Our consolidation time has grown from one hour to approximately six hours now. Is this normal, and if not, what basic things can I do to troubleshoot?
Also, not just the textbook version: can someone truly explain the difference between Consolidate and Consolidate All with Data, and when each should be used?

There are many factors that could impact consolidation time. Assuming your underlying IT infrastructure is sized appropriately for HFM, metadata and rules are good places to start the performance tuning.
** Warning: think twice before really undertaking a performance tuning exercise. It can take many good weeks of testing (calculation results, report impacts, ...) and of working back out why the author (if it is not you) set things up the way he/she did. Expect to debug the original design along the way. **
You may first isolate which routine takes most of the time:
Calculate() - use "Force Calculate" on the different Value members.
One improvement opportunity: use HFM's hierarchy rollups instead of HS.Exp to add things up.
Consolidate() - use "Force Calculate Contribution", which also includes Calculate() for the [Proportion] and [Elimination] members.
Then observe HFM's consolidation progress bar to see whether "Calculating..." or "Consolidating..." takes more time. Logging custom rule messages with timings to a file is your friend.
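For example, here is a minimal timing sketch along those lines, assuming CreateObject and file access are not locked down for your rules, and assuming the HFM service account can write to a placeholder folder such as C:\HFMLogs:
    ' Minimal timing sketch for the rules file. C:\HFMLogs is a placeholder path.
    Sub Calculate()
        Dim dStart
        dStart = Timer                          ' VBScript Timer: seconds since midnight
        ' ... existing calculation logic ...
        WriteTimerLog "Calculate", Timer - dStart
    End Sub

    Sub WriteTimerLog(sRoutine, dSeconds)
        ' Append one line per POV so the slow entities, periods or Value members stand out.
        Dim oFSO, oFile
        Set oFSO = CreateObject("Scripting.FileSystemObject")
        Set oFile = oFSO.OpenTextFile("C:\HFMLogs\RuleTimer.log", 8, True)   ' 8 = ForAppending
        oFile.WriteLine Now & vbTab & sRoutine & vbTab & _
            HS.Scenario.Member & "." & HS.Year.Member & "." & HS.Period.Member & "." & _
            HS.Entity.Member & "." & HS.Value.Member & vbTab & _
            FormatNumber(dSeconds, 2) & " s"
        oFile.Close
    End Sub
Sorting that log by the elapsed-time column usually shows quickly whether a handful of entities, or one Value member, dominates the run.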
Of course, every setup differs and can be quite unique. This is just a little share of what I have done before.

Similar Messages

  • Consolidation Time in Version 9.3.3

    Has anyone experienced substantially longer consolidation times in version 9.3.3 after moving from 9.3.1? Our daily consolidation has increased from 60min to 105min after putting in the new version. The only thing we changed from the prior month other than the version is locking of data. Would this impact consolidation significantly?
    Thanks.

    Locking the entities in a scenario should reduce consolidation time, as it would prevent any calculations from being performed on the locked items. Are you performing the same type of consolidation (Impact/All with Data/All)? You could also review any scenarios which you may be impacting, e.g. using HS.ImpactStatus.
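    As a hedged illustration of HS.ImpactStatus (the member names below are made up; the point is that every POV you impact is one more POV the next consolidation has to recalculate):
        ' Hypothetical fragment: impact January of the next year only when December is calculated,
        ' so the impacted scope stays as narrow as possible.
        Sub Calculate()
            If HS.Period.Member = "Dec" Then
                HS.ImpactStatus "Scenario#Actual.Year#" & CStr(CInt(HS.Year.Member) + 1) & ".Period#Jan"
            End If
        End Sub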
    JTF

  • Media Consolidation time

    It's time for some media consolidation.
    I have a 13-part series shot on DV. I have a master project with all bins, sequences, graphics etc. We have shot about a hundred reels so far (I anticipate at least another 20), and I have 1.5 TB of storage.
    I'm about half way through and shooting is still going on.
    I don't want to run out of storage space, and I am conscious that I have lots of footage that hasn't made it into the cuts and that I need to get rid of.
    It is a property restoration show that follows projects over as much as a year, so I will need to use shots from the first day in the last programme etc. That means I can't simply use the Media Manager to chuck away media based on what's used in one sequence, because I might have used it in another episode.
    So how do I simply chuck away the shots that I haven't used?
    I had the foresight to give each shot a real name in the clip name (RN23 exteriors) so tracking shots down is easier.
    I'll be frank - the Media Manager has me stumped. I've read Larry Jordan's guides, the FCP manual and a couple of other things I've seen, and still I have zero confidence.
    Any thoughts, anyone?
    as ever - Muchos Gracias
    Lex

    I don't find there to be anything mysterious about Media Manager. It's consistently worthless.
    You won't get anything but a hearty endorsement of that statement from me but, just to waste some bandwidth:
    Posited: Media management is a fundamental operation of a competent NLE. Does this mean that FCP is fundamentally incompetent?
    Evidence: The language used at the interface and the operating presentation of Apple's MM suggests it was created by non-Macintosh people who knew nothing about video for a non-video application and then (miserably) shoehorned into FCP.
    We hear endless complaints about Media Manager.
    We hear no positive things about it, none.
    And yet Media Manager appears to function flawlessly at the big houses where FCP is the video workhorse. Why isn't there more and louder squealing from the major players in the business? Paranoia, but I think there IS another version of FCP.
    bogiesan

  • Consolidation taking time for Specific POV

    When users run a consolidation for the following entity structure, with Entity A and Contribution Total in the POV:
    Entity A
      Entity B
        Entity C
          Entity D
    (Here B is a child of A, C is a child of B, and D is a child of C.)
    It takes around 16 minutes.
    However, when the POV is changed to Entity B and Contribution Total, it takes 4 minutes. Can anyone let me know why there can be such a huge difference in consolidation timing with a change in POV?

    The first thing to understand is that running a consolidation on the Contribution Total member not only consolidates data to that entity, it also consolidates data to the <Entity Currency> member of the parent of that entity, which means that the siblings of that entity are also consolidated. In your example, by selecting Contribution Total on Entity A you would consolidate all data to the parent of Entity A, including any siblings of Entity A. You have not indicated whether Entity A has any siblings.
    The difference in your consolidation time could, therefore, be explained by the parent of Entity A having a much larger number of descendants than Entity A alone.
    If you only wanted to consolidate data up to Entity A, then you should choose <Entity Currency> or <Entity Curr Total> of Entity A. That should give you a clearer picture of the difference in consolidation times.
    There are also other possibilities such as rules that only run on certain entities which could also be related.
    Brian Maguire

  • Consolidator Module question

    I've run into an issue with URL monitoring in SCOM 2012 SP1 and discovered that the problem lies in the Consolidator Module. I found the following 2 articles that led me to the solution.
    http://marcusoh.blogspot.com/2010/07/scom-overloading-consolidation-module.html
    http://social.technet.microsoft.com/wiki/contents/articles/20301.how-to-add-consolidation-for-url-monitoring-in-scom-20072012.aspx
    The fix was to add <StoreState>false</StoreState> to the Consolidator section in the monitor. It works like a charm!
    However, I need to understand what this StoreState actually does. Does anyone have a description of what this does? There is nothing anywhere in TechNet...
    Thanks in advance!

    Hi,
    <StoreState>true</StoreState> stores the sample output in the Consolidator module's internal state. The Consolidator module can store 128 KB of data; if more than 128 KB is collected, it causes the error below in the event log:
    The Microsoft Operations Manager Consolidator Module failed to save the state after processing and might lose data.
    Error: 0x80070057
    One or more workflows were affected by this.
    You can overcome this issue by reducing the consolidation time or increasing the sample time, or you can set <StoreState>false</StoreState>, which tells the module not to store its internal state between sample runs.
    Regards,
    sridhar v

  • The HFM Consolidation Mystery

    Hi,
    I am looking to optimise the rules file that is being used for consolidation. I would like to have a clear understanding of how the consolidation exactly happens, the concept of subcubes, performance best practices, the use of virtual and physical memory, detailed consolidation log files, etc. Do we have some documentation that throws light on these topics?
    I have referred to the admin guide, and it seems to explain the logical steps involved during consolidation rather than the more technical details like memory utilization and the concept of subcubes. Any help would be greatly appreciated.
    Cheers!!!
    Lijoy

    Kelly,
    Thanks for the help. But training does not seem an option as of now.
    Just a few questions.
    Does the consolidation time include the time needed to read/write the data to the database? Is this value considerable compared to the total time taken for consolidation?
    I have tried implementing timers in the rules file to check the exact time taken for consolidating each POV. However, the total of the times shown by the timers is merely half of the total consolidation time. What other processes may be taking the extra time?
    I would like to know where the remaining time is being used.
    Regards,
    Lijoy

  • During consolidation CPU goes to 100%

    Hi,
    During consolidation the CPU goes to 100%.
    Can you suggest some performance tuning tips to improve performance?
    This is on an HFM 9.3.1.2 build.

    If you are concerned about the processor utilization AND you cannot simply upgrade your hardware AND consolidation time is not an issue for you, then the following registry changes may be for you. All three values live under HKEY_LOCAL_MACHINE\SOFTWARE\Hyperion Solutions\Hyperion Financial Management\Server\Running Tasks:
    MaxNumConcurrentConsolidations (REG_SZ, min 1, max 8, unit: consolidations)
    Controls the number of concurrent consolidations allowed per application server. Any consolidations executed above this value are queued as Scheduled Consolidations.
    NumConsolidationThreads (REG_SZ, min 1, max 8, unit: threads)
    Controls the multi-threading of consolidations per application server. Lowering the value limits the system's utilization of system resources, resulting in slower consolidation performance.
    NumConsolidationsAllowed (REG_SZ, min 1, max 20, default 8, unit: consolidations)
    Controls the number of consolidations allowed per application across all the application servers.
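    If you prefer scripting the change, here is a small hedged sketch using WScript.Shell; run it directly on each HFM application server, back up the key first, and note that RegRead raises an error if the value has not been created yet (the value "4" is just an example within the documented 1-8 range):
        ' Hypothetical helper: inspect and lower NumConsolidationThreads on this app server.
        Dim oShell, sKey
        Set oShell = CreateObject("WScript.Shell")
        sKey = "HKLM\SOFTWARE\Hyperion Solutions\Hyperion Financial Management\Server\Running Tasks\NumConsolidationThreads"
        On Error Resume Next
        WScript.Echo "Current value: " & oShell.RegRead(sKey)   ' errors if the value does not exist yet
        On Error GoTo 0
        oShell.RegWrite sKey, "4", "REG_SZ"                     ' lower the thread count to ease CPU pressure
        WScript.Echo "New value: " & oShell.RegRead(sKey)
    The new value is typically only picked up after the HFM services on that server are restarted.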

  • Consolidation Status

    Hi,
    We have an application in HFM (Ver. 9.3) which is completely consolidated. We copied the application using the copy application utility and created a new application.
    When we look at the consolidation status in the new application, the highest level shows OK, whereas the lower levels show CN and some show CN ND. Shouldn't they all show OK, since the new application was copied from an application which was completely consolidated?
    Please let me know.
    Thanks,
    Sathish

    Hi,
    Thanks for your reply. But my understanding is that when we copy an application which is completely consolidated, the new application is a replica of it, including the consolidated data.
    Isn't one of the logical reasons for copying an application to another precisely to save the consolidation time for the new app?
    Please let me know.
    Thanks,
    Sathish

  • Intermittent Slow consolidations

    Hi All,
    We are currently facing intermittent consolidation timings in our application.
    When we consolidate the top parent entity (our application has around 20 alternate entity hierarchies with around 1,000 base entities), the usual consolidation time is around 30-40 minutes. However, we have seen that at times consolidating the same period takes about 1 hour 30 minutes to 2 hours. We loaded a blank rules file, consolidated, and noted that the first consolidation took around 1 hour 20 minutes; when we ran two more consolidations for the same period at the top-level entity using Consolidate All with Data, the consolidation completed in 15 minutes. We then loaded the actual rules file, and the first consolidation took about 1 hour 30 minutes; however, two subsequent consolidations on the same period finished in about 30 minutes. We have reviewed the rules multiple times and adjusted situations that could cause potential issues, yet we still see this problem come up intermittently. Any insight in this regard would be of great help.
    Thanks,
    AP

    Thanks JTF for the quick reply!
    Since we were testing this problem, we did not have other users logged in, and this was the only consolidation running at the time. We do have impact status rules in the system, but they are written to execute only on the last period of the fiscal year to impact the first period of the subsequent year (also, this is built with some conditions, so it does not execute under all circumstances), and we were consolidating the first period of the fiscal year. Also, within the rules we tried adding timer code to see which parts take more time than others. We do not have Sub Consolidate in use, as we are using the default consolidation. The two large sections of the rules, Sub Calculate and Sub Translate, add up to around 16 to 18 minutes when consolidating the top parent for a single period, whereas the overall consolidation would have taken either 1 hour 30 minutes or 40 minutes.
    Thanks for your help!
    Regards,
    AP

  • Adding New Dimension In An Existing Cube (Block)

    Hi fellows,
    I'm not an expert, so I need your feedback.
    I am supporting a cube that was built a decade ago using Essbase 6.5. It has 7 dimensions (3 dense, 4 sparse). With the new reporting requirement, I need to add a new dimension, to be called "BaseTimePeriod", that looks like a Time dimension but will show the "Constant Dollar" amount calculation for different Scenarios.
    The cube has been there for a long time, and the expert who pioneered it is no longer with the company. All of our reports use the API, called from MS Excel, to retrieve data from the cube and format it the way the users want it. As well, 75% of our data is user input using the Essbase Excel Add-in lock/send commands.
    My problem now is that, since I need to introduce another dimension for the "Constant Dollar" calculation, is there any way to add the new dimension so that the existing process of uploading data by the users, as well as retrieving data through the Excel API, will not be affected?
    I tried adding the new dimension on our R&D server. When I try to retrieve historical data from the cube in Excel, since the "BaseTimePeriod" dimension was not there before, the Excel report layout gets affected and a new row is inserted placing "BaseTimePeriod" in that cell. And when I try to upload data through the Excel Add-in lock/send action, it does not allow me unless I put in a member from "BaseTimePeriod" to complete the dimension member combination.
    My questions now are:
    1) How can I add a new dimension without impacting the historical data?
    2) Is there any way to place a default value for the new dimension on the historical data so it would complete the dimension member combination, so that whenever I retrieve and upload data there will be no problem?
    Any insights related to these questions are highly appreciated. Thanks!
    obelisk

    If you add the new dimension to a cube that already has data, you should get a prompt when you save the outline to associate the existing data with a member from the new dimension. You will then go into a dense restructure, which could take a while depending on the size of your database. Another option would be to export your data in column format and load it into a relational table in something like SQL or Access; you can then add the new dimension to the export in the relational tool, export it back out to a text file, and use a load rule to reload it.
    As for lock & send and the existing reports, in most cases adding a new dim will result in all of those needing to be updated. This is why we try not to add new dims to existing cubes, especially ones that have been around a while.
    Another thing to consider is the impact of adding a new dim. I don't know the size of your existing database, but you are going to increase it by adding a new dim, and things like consolidation time and retrievals can be impacted, so you need to test all of this.
    You should evaluate why you need a new dim and determine whether you can achieve the same result without adding one. While a new dim might be the preference in a new cube design, when dealing with something that has been around a while you might want to be creative and somewhat more compromising if it is the lesser of two evils. If everyone is committed to a rebuild, then go ahead and rebuild; especially if it's been around a while, there are probably other things that could stand to be cleaned up as well. But if nobody wants to deal with that, then look to see if you can get there without adding a new dim, perhaps with a new rollup in an existing dim or an attribute dim. A more thorough explanation of the business case would help in suggesting an alternate solution.
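    As a rough illustration of question 2 (placing a default member on the historical rows), here is a hedged sketch that tags a column-format export directly with a small script, as an alternative to the relational route described above; the file names and the default member name are placeholders, and the delimiter and column position must match your actual export layout and load rule:
        ' Hypothetical sketch: prepend a default "BaseTimePeriod" member to every row of a
        ' column-format export so the historical data can be reloaded through a load rule.
        Const ForReading = 1
        Dim oFSO, oIn, oOut, sLine
        Set oFSO = CreateObject("Scripting.FileSystemObject")
        Set oIn  = oFSO.OpenTextFile("C:\Exports\histdata.txt", ForReading)
        Set oOut = oFSO.CreateTextFile("C:\Exports\histdata_tagged.txt", True)
        Do Until oIn.AtEndOfStream
            sLine = oIn.ReadLine
            If Len(Trim(sLine)) > 0 Then
                oOut.WriteLine """NoBaseTimePeriod""" & vbTab & sLine   ' placeholder default member
            End If
        Loop
        oIn.Close
        oOut.Close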

  • HFM Performance and Number of JV Line Items

    How much of consolidation time and SmartView response time is tied to the number of JV line items?
    For example, one year we had 1,200 total JVs with 164,000 line items. If we could cut this by 25% or so, would there be a great benefit in performance?
    Thanks

    Please reply...

  • HFM Assessment

    Hi guys,
    I think this is a silly question, but for me it's a challenge. I'm an HFM implementation resource, so I usually follow the project plan and build the application according to the client's needs. However, this time my boss has asked me to do an assessment for a client who already has HFM up and running. I am supposed to understand the business and the application and recommend some improvements... For a technical resource like me, that is kind of difficult.
    So here's my question:
    How would you start the assessment? What are the suggestions you can make for a working (good) application? More calculations (I'm afraid it could impact the performance of the application)? Ratio Analysis (this is more related to business)?
    I know it sounds like I need a self-help group, but trust me... For me, it's a huge challenge!
    Thank you all for your suggestions.
    Lu

    Hey Lu,
    Sounds like you'll want to review the infrastructure (which should be easy for you) as well as the application design. I think the first thing you'll want to do is to ensure you're meeting with the right people, those who have a stake in the application and processes. If they have a good administrator, he/she might be a place to start. Next thing is to ensure you're asking the right questions. There are plenty, but essentially you'll want to identify current pain points.
    Is the infrastructure physical or virtual?
    Is the infrastructure up to date?
    How many environments are set up (prod/qa/dev)?
    Is moving from prod to dev and vice versa easy?
    Is there a disaster recovery plan and has it been tested?
    Is the IT department capable of handling issues?
    Are there any pressing IT issues?
    Are there any upcoming initiatives that would impact HFM/tools (rollout of Office 2010)?
    Do end users have a process to report issues, especially during critical times like close?
    What version of the application are they on?
    Is there a reason to update (new features, 64bit performance)?
    What inputs are there to HFM (FDM, ERPi)?
    How does the data flow from source systems?
    What outputs are there and for what functional groups (who are the customers, how is data extracted, is it meeting their needs)?
    How is training and could it be better?
    Are the end users comfortable with their processes (SmartView, FR)?
    Is there a power user group and routine calls?
    What are the existing major pain points?
    Is the consolidation slow?
    Could metadata be cleaned up?
    If designing the app today, what would be done differently?
    How were the custom dimensions utilized?
    Is there too much data in the consolidation system?
    Could additional data be added (tax, cash flow, by product/customer)?
    Are all high level reporting requirements being met?
    Do eliminations and currency translation currently work properly?
    Are they taking advantage of all the app has to offer (tasklists, process management, batch reporting, topside journals, i/c matching, etc)?
    Would any new hierarchies be of benefit? Could some be removed? Would there be benefit to rebuilding the app and cleaning some things up?
    I'm sure you could go on and on for days with questions like these. As for the rules file that does the calculations, you'll want to just dig in and understand it (it's a VBScript file, and the admin can provide you an extract or help you with it). There may or may not be ways to drive some efficiencies and decrease consolidation times. There are some third-party tools to help with this, but taking some time and walking through it in its entirety works very well.
    Good luck!

  • Estimation

    Hi MDM Experts
    I would like to know how MDM objects are estimated.
    Suppose I have the following objects to be built:
    1. Modeling Repository
    2. Set up Validations
    3. Set up Multi-level Workflow approval process
    4. Build Searches
    5. Setup for MDIS
    6. Setup for MDSS
    What is the estimation process generally practiced?
    Regards

    Hi Vickey!
    The following are the factors influencing the timelines of an MDM implementation; one can go to a granular level and estimate the time requirements of each of the individual processes.
    DATA MODELING
    SAP provides standard repository & maps for Vendors, Customers, Materials, BP and Article. These repositories contain standard data elements. Z elements/tables defined by the client have to be added to the repository.
    Identifying the complete list of Z fields/tables to be included in MDM and modeling the elements in the repository can be a time-consuming & complicated activity. Further, any modification to the repository will lead to modification of standard import maps, standard syndication maps and the standard XI configuration being used in the scenario.
    COUNT OF MASTER RECORDS
    The count of master records has an effect on cut-over data import time and consolidation time. If 'consolidated master data' forms a part of any project deliverable, then the number of master records will have a significant impact on the project timelines.
    COUNT OF SOURCE SYSTEMS
    Extracting data from source systems requires the following deliverables:
    • Creating/using an extraction program in the source system.
    • Configuration in XI.
    • Import Maps and Ports in MDM.
    COUNT OF DESTINATION ITEMS
    Pushing data to the destination system requires the following deliverables:
    • Syndication Maps and Ports in MDM
    • Configuration in XI
    SAP provides standard XI configuration and standard syndication maps for SAP R/3. For non-SAP destination systems, some or all of these deliverables might have to be created and configured.
    Does a data enrichment scenario exist?
    Some scenarios might demand data enrichment using a 3rd-party application. Identify those scenarios and determine whether direct import of data is possible. If not, custom Java development or configuration of the 'MDM Data Enrichment Adapter' might be required.
    Does an extension scenario exist?
    Extension scenarios are common in 'MDM for Materials' implementations. Such a requirement demands modification of 'Qualified Tables' using the MDM Java or ABAP APIs.
    WORKFLOW CREATION TOOL
    Workflows in MDM can be created using any of the below-mentioned technologies:
    • MDM WORKFLOWS: MDM Workflows reduce development time, but they have limited functionality and cannot be used in complicated scenarios.
    • GUIDED PROCEDURES: Guided Procedures enable the creation of workflows that can call processes/objects from multiple systems. Custom functionality can be achieved by calling Java code from Guided Procedures.
    • BUSINESS WORKFLOWS: Business Workflow is a proven technology for implementing workflows in SAP. Exposing and calling these workflows from MDM demands integrating Business Workflow with the UWL. Development of these workflows has to be done in the SAP system.
    MASTER DATA Vs TRANSACTIONAL DATA
    In some scenarios, the client might find it convenient to maintain transactional data in the MDM repository. Though such a requirement can be mapped, it may lead to a major performance bottleneck due to the constant import/modification and syndication required to keep the transactional data updated.
    FINALIZING REPOSITORY STRUCTURE
    Ensure that the repository structure is finalized at the end of the prototyping phase. The finalized data model should be well documented and signed off by the client. Any change to the structure thereafter will result in a linear increase in the implementation time of the project. This is because any modification to the structure might result in modification to all or some of the following components:
    • Repository structure
    • All import maps created on the relevant table/field
    • All syndication maps created on the relevant table/field
    • All validations/assignments created on the relevant field
    The above also applies to changes in display fields. Hence, the display fields for a table should also be finalized in the prototyping phase.
    ACCEPTANCE OF THE NEW SYSTEM BY END-USERS
    Data modeling in MDM has a major impact on how users create new master records in MDM. At times, the repository may have been designed to make optimum use of Qualified tables and other similar feature-rich tables in MDM. However, if the end users find it difficult to create data as per the repository design, the whole purpose of implementing MDM might be defeated.
    SIZING / BANDWIDTH
    Server sizing is of utmost importance in an MDM implementation project. The MDM server loads a sizable amount of data in-cache to enable the various search functionalities and to minimize the probability of disk accesses. This demands the availability of enough memory (RAM). Also, when importing data into the repository through the MDM Import Server, several resource-intensive steps like parsing, transformation and database writes are performed. Efficient and speedy import demands proper server sizing. Further, ensure that the recommended bandwidth is available.

  • How to capture HFM statistics

    Hi,
    I would like to capture the following stats from HFM:
    - Consolidation time
    - Database Growth
    - Active users who actually use the application
    (we may have more users with access to the application compared to the actual number of users who use it)
    Can anyone help on how to capture the above stats? If the information exists in tables in the HFM application, which tables are they and how do I query them?
    Thanks.

    The easiest way I always used is to look to transaction sm50 and display the CPU usage of each process (click on one of the little toolbar icons). The dispatcher always tries to send user requests to the first process. If that process is busy, it tries the second process ... and so on. For that reason the first dialog process has always the highest CPU usage and the last one (or multiple ones) the lowest usage or zero CPU usage. The highest ranking dialog process with some CPU usage indicates the peak process use since system start-up. The same holds for any other process type. Hope this helps.
    - Joerg
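    On the HFM side of the question, one place people often look is the per-application task-audit table, typically named <APPNAME>_TASK_AUDIT. Below is a hedged sketch of pulling recent task timings from it with ADO; the connection string, table name and column names are assumptions here, so check your own schema before relying on it:
        ' Hypothetical sketch: list recent tasks and their start/end times from the task-audit table.
        ' Replace the server, database, table and column names with whatever your schema actually uses.
        Dim oConn, oRS, sSQL
        Set oConn = CreateObject("ADODB.Connection")
        oConn.Open "Provider=SQLOLEDB;Data Source=HFMDBSERVER;Initial Catalog=HFMDB;Integrated Security=SSPI"
        sSQL = "SELECT ActivityUserID, ServerName, StartTime, EndTime " & _
               "FROM MYAPP_TASK_AUDIT ORDER BY StartTime DESC"
        Set oRS = oConn.Execute(sSQL)
        Do Until oRS.EOF
            WScript.Echo oRS("ActivityUserID") & vbTab & oRS("ServerName") & vbTab & _
                         oRS("StartTime") & " -> " & oRS("EndTime")
            oRS.MoveNext
        Loop
        oRS.Close
        oConn.Close
    Database growth can be tracked with your DBMS's own size reports, and the concurrent-user picture is often easier to get from HFM's own administration screens (users on system) than from the tables directly.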

  • Effect of Journals on performance

    We have a rather large application, and currently there are numerous journal entries. I am fairly certain the journals affect the overall performance of the application. The subcubes are based on the Entity, Year, Scenario, and Value dimensions, so you have more subcubes to contend with. I am suggesting to the users that they limit the number of journals in the application so that they will see better consolidation times, but I have not located any documentation to that effect.

