Parallel Calc Issue

We are currently running 6.5.1 on our NT server and have encountered inconsistencies in our aggregations when using the parallel calc (CALCPARALLEL) settings. We have two processors and are trying to use both of them in the calc. Has anyone run into a similar problem? Any ideas would be appreciated.

Rich, I agree; we are using 6.5.3, and parallel calc is kicking butt. When I originally benchmarked parallel calc with 6.5.1, I was disappointed with the results. In all cases, serial calcs ran faster.

As for Hyperion's recommendation that you leave one processor free for the OS and for thread distribution, I suggest that users test different configurations. We have found that using all four processors on our 4-way boxes yields the best calc times; four processors are close to 20% faster on average than just three.

Jeff McAhren
Dallas, Texas
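For anyone benchmarking the same thing, a minimal calc script sketch that requests two threads on a two-processor box (essbase.cfg must also permit parallel calc, e.g. with a CALCPARALLEL 2 entry; the values here are illustrative, not recommendations):

/* ask Essbase for up to 2 calculation threads in this script */
SET CALCPARALLEL 2;
CALC ALL;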

Similar Messages

  • Parallel Caching Issue

    We have found issues with parallel loading of the OLAP cache using reporting agent jobs, where entries are not populated correctly. We rely on the OLAP cache to deliver very high levels of concurrency during peak times.
    Once the main batch data loading has completed, we run background reporting agent jobs to pre-populate the OLAP cache. Each job contains a web application template that holds one or more queries and processes 1500+ stores per run.
    We have different reporting agent jobs for the different web application templates, but we have discovered that if we run these jobs in parallel we do not get the full benefit of the OLAP cache compared to running them sequentially.
    If we run the jobs in parallel, RSRCACHE suggests the entries for these queries populated correctly, but when we check RSDDSTATS for query performance the following day, we can see that a large number of the stores still hit the database and did not benefit from the cache entries. Sometimes as many as 60% fail to hit the cache.
    If we run the same jobs sequentially and then check RSDDSTATS the following day, we see a 100% success rate, with every store hitting the OLAP cache.
    Is anyone able to advise how we can resolve this parallel caching issue?


  • Dynamic Calc Issue - CalcLockBlock or Data Cache Setting

We recently started seeing an issue with a Dynamic scenario member in our UAT and DEV environments. When we try to reference the scenario member in Financial Reports, we get the following error:
    Error executing query: The data form grid is invalid. Verify that all members selected are in Essbase. Check log for details.com.hyperion.planning.HspException
    In SmartView, if I try to reference that scenario member, I get the following:
    The dynamic calc processor cannot allocate more than [10] blocks from the heap. Either the CalcLockBlock setting is too low or the data cache size setting is too low.
    The dynamic calcs worked fine in both environments up until recently, and no changes were made to Essbase, so I am not sure why they stopped working.
    I tried setting CalcLockBlock in the essbase.cfg file and increasing the data cache size. When I increased the CalcLockBlock setting, I would get the same error.
    When I increased the data cache size, Financial Reporting would just sit there loading and never show the report. In SmartView, it would give me an error saying it had timed out and to try increasing the NetRetry and NetDelay values.

    Thanks for the responses guys.
    NN:
    I tried to double the Index Cache Setting and the Data Cache setting, but it appears when I do that, it crashes my Essbase app. I also tried adding the DYNCALCCACHEMAXSIZE and QRYGOVEXECTIME to essbase.cfg (without the Cache settings since it is crashing), and still no luck.
    John:
    I already had those values set on my client machine; I tried setting them on the server as well, but no luck.
    The exact error message I get after increasing the cache settings is: "Essbase Error (1042017) Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the olap.server.netConnectTry and/or olap.server.netDelay values in the essbase.properties. Restart the server and try again."
    From the app's essbase log:
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Error(1023040)
    msg from remote site [[Wed Jun 06 10:07:44 2012]CCM6000SR-HUESS/PropBud/NOIStmt/admin/Error(1042017) Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the NetRetryCount and/or NetDelay values in the ESSBASE.CFG file. Update this file on both client and server. Restart the client and try again.]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Error(1200467)
    Error executing formula for [Resident Days for CCPRD NOI]: status code [1042017] in function [@_XREF]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Warning(1080014)
    Transaction [ 0x10013( 0x4fcf63b8.0xcaa30 ) ] aborted due to status [1042017].
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1013091)
    Received Command [Process local xref/xwrite request] from user [admin@Native Directory]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008108)
    Essbase Internal Logic Error [1060]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008106)
    Exception error log [E:\Oracle\Middleware\user_projects\epmsystem2\diagnostics\logs\essbase\essbase_0\app\PropBud\NOIStmt\log00014.xcp] is being created...
    [Wed Jun 06 10:07:46 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008107)
    Exception error log completed E:\Oracle\Middleware\user_projects\epmsystem2\diagnostics\logs\essbase\essbase_0\app\PropBud\NOIStmt\log00014.xcp please contact technical support and provide them with this file
    [Wed Jun 06 10:07:46 2012]Local/PropBud///4340/Info(1002089)
    RECEIVED ABNORMAL SHUTDOWN COMMAND - APPLICATION TERMINATING
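    For reference, a minimal essbase.cfg sketch of the settings this thread touches; the values are illustrative starting points, not recommendations:
    CALCLOCKBLOCKHIGH 1500
    CALCLOCKBLOCKDEFAULT 500
    NETDELAY 1200
    NETRETRYCOUNT 900
    DYNCALCCACHEMAXSIZE 40M
    The data cache itself is sized per database, e.g. in MaxL: alter database 'PropBud'.'NOIStmt' set data_cache_size 256m; (the app and db names here come from the log above).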

  • Parallel sqlldr issue with 10gR2

    I have devised a cool process for loading very large flat files into an Oracle database using a multi threaded parallel sqlldr process.
    Background: I could not use direct path loads due to one or two issues that simply eliminated it as an option. Using SKIP and LOAD, I launch 5 distinct sqlldr jobs that each load a different portion of the same file. This exceeded my expectations, and in some instances I rivaled direct path load times.
    This worked very well in 9iR2 but when upgrading to 10gR2 in both Solaris and AIX, sqlldr could no longer compute correctly using SKIP values greater than 0 and LOAD values greater than 0.
    Any thoughts?

    Interesting...
    You sound like an expert with Oracle 9.2 sqlldr; no thoughts on remedying your specific issue.
    10g is a "brand-new product"; the Data Pump enhancements are so pervasive that you may want to take a look if you are unable to migrate your 9.2 approach to 10g: expdp, impdp.
    i.e. support for "fine grained" object selection (CONTENT, INCLUDE, EXCLUDE), parallelism, external tables, appending to populated tables, etc.
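    For reference, a sketch of the SKIP/LOAD split the original poster describes; the file name, control file, credentials, and row counts are all hypothetical, assuming a 1,000,000-row file cut into five slices:
    #!/bin/sh
    # each job skips the rows handled by the others and loads only its own slice
    sqlldr userid=scott/tiger control=big_file.ctl skip=0 load=200000 &
    sqlldr userid=scott/tiger control=big_file.ctl skip=200000 load=200000 &
    sqlldr userid=scott/tiger control=big_file.ctl skip=400000 load=200000 &
    sqlldr userid=scott/tiger control=big_file.ctl skip=600000 load=200000 &
    sqlldr userid=scott/tiger control=big_file.ctl skip=800000 load=200000 &
    wait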

  • Parallel Valuation Issue

    Hi Experts,
    In TRM, I have configured three valuation areas: 001 for operative valuation, and 002 and 003 for parallel valuation. After some time I no longer required parallel valuation area 003, so I deleted it. The deletion caused my TBB1 posting to fail with an error like a short dump, and I was getting logged out of TBB1.
    To fix that, I went to the Initialize Parallel Valuation step and executed Fast Entry for parallel valuation area 003; then I was able to post my transaction in TBB1. But I am finding the entries are coming only for valuation area 001 (operative valuation) and not for valuation area 002 (parallel valuation).
    I would appreciate any kind of guidance or direction on this issue.
    Ganesh

    Hi Ganesh,
    I am currently having similar trouble with TBB1, where I receive the error message below:
    TRL initialization for MM, FX, Derivatives, co. code WTRD, valn area 003 is not yet complete
    Message no. TPM_TRL052
    Are you familiar with this issue? Any help would be greatly appreciated! I hope that you can help.

  • Parallels 6 issues with some applications

    I installed the trial of Parallels 6. First of all, it doesn't work faster; second, some applications crash, like iClone 4, and the Crazy Talk display is blurry. I allocated more memory (2GB); nothing helped.
    Has anyone had similar issues?
    Thanks.
    Thanks.

    That is a great idea, to contact Parallels support, if only it were that easy. Unfortunately they do not respond to a person who downloaded just the trial; secondly, if I do purchase their software, they charge for their support. Paying for the software, paying for the support, wasting time, and finding out that it doesn't work...
    I am better off going to the Apple support group, hoping to find an answer, not a suggestion.

  • Mountain Lion Parallels Desktop issue?

    Basically, I updated to OS X Mountain Lion today, and instead of Boot Camping to use Windows I wanted quick access without restarting the MacBook, so I tried downloading and installing the Parallels Desktop free trial. However, it does not let me use it on Mountain Lion. Has anyone got this issue fixed yet? Hit me up with responses.

    I'm using version 7.0.15104.778994 and having no trouble under Mountain Lion. I think that it was a July 10th release (coinciding with the GM developer release of ML).
    What version do you have?
    Clinton

  • Parallels Explorer Issue

    I have an IMac that's 1 week old.
    I installed Parallels Desktop 3.0 Premium Edition for Mac successfully according to the installation program.
    When I try and run Parallels Explorer I get the following error message:
    The application Parallels Desktop quit unexpectedly
    I have gone to their website and found an article (id 4790) that seemed like it may be related to my issue, but the results are still the same.
    I have asked for assistance and am waiting for a reply.
    I even downloaded build 5608 and am still getting the same problem.
    Can anyone help?

    Thanks for the link.
    I must have been doing something out of sequence.
    When I installed the new build again it worked.

  • Parallel processing issues

    First of all, I don't know if the problem I am having is suitable for this forum group, as I cannot find a better group to post it in.
    I have written a program which has a web part and a backend part. As soon as the program is deployed onto an OC4J, it starts retrieving data from the database and performing its task in the backend part. The web part lets me check some basic info about the application and retrieve logs from the log directory.
    As long as I deploy onto a single non-clustered server with only 1 OC4J instance, I am all good, but whenever I have a clustered environment consisting of 4 hosts with the following configuration, I am toasted.
         Each host has its own OC4J instance with a copy of the program deployed onto it.
         2 of the 4 OC4Js are turned on immediately and the other 2 serve as backup. Whenever the 2 active OC4Js shut down, the backup OC4Js become active and take over the job.
         The 2 active OC4Js run the program in parallel to share the work load.
    The problem arises when the 2 active copies of the program run in parallel. The program retrieves a transaction id from the db, performs some necessary actions, and saves a new id number larger than the retrieved id back to the db. These actions are performed repeatedly, with a 1-minute interval between them. If 2 copies of the program run in different OC4J instances in parallel, there is a chance that both copies retrieve the same transaction id at the start, which must be strictly forbidden due to business logic issues.
    So my question is this: how should the program be designed so that the 2 copies residing on 2 different OC4Js can have some mutual agreement and know when they are working on a duplicate transaction id? I need to have an exact copy of the program deployed onto each OC4J instance.

    Hello,
    You need to lock the database record which contains the transaction id you are working on. Use a select for update statement. If one application has a lock on the record, then the second one will wait until the lock is released (the transaction is committed or rolled back) before its select for update statement returns the record. Or you can specify to skip locked records, so the second application won't get blocked. Either way, only one application will be able to read the record in question.
    You can see examples here:
    [http://www.techonthenet.com/oracle/cursors/for_update.php]
    Zsom
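    A minimal sketch of the locking approach Zsom describes, assuming a hypothetical TXN_COUNTER table that holds the last issued transaction id:
    -- only one instance at a time can hold this row lock; the other blocks
    -- until commit/rollback (append SKIP LOCKED to skip instead of waiting)
    SELECT last_txn_id
      FROM txn_counter
     WHERE counter_name = 'BACKEND_JOB'
       FOR UPDATE;
    -- do the work, then advance the counter and release the lock
    UPDATE txn_counter
       SET last_txn_id = last_txn_id + 1
     WHERE counter_name = 'BACKEND_JOB';
    COMMIT;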

  • Enabling Parallel Calc (CALCTASKDIMS)

    Hi, I'm looking for information on the CALCTASKDIMS setting. The issue is identifying additional tasks for parallel calculation. As I read in the DBA guide, Essbase uses the last sparse dimension in an outline to identify tasks that can be performed concurrently. But how do I know whether I need CALCTASKDIMS to specify a second or a third sparse dimension? Thanks for your help. Regards, Sébastien ROUX

    I saw this in the DBAG. The 50% empty-task ratio pointed out there is probably a good tuning goal for CALCTASKDIMS:
    - Use this configuration setting only if your outline generates many empty tasks, thus reducing opportunities for parallel calculation. See the Essbase Database Administrator's Guide for more information about what kinds of outlines or calculation scripts generate many empty tasks.
    - Essbase writes a message to the application log specifying the number of tasks that can be executed concurrently at any one time (based on the data, not the value of CALCPARALLEL or SET CALCPARALLEL):
    Calculation task schedule [576,35,14,3,2,1]
    This example message indicates that 576 tasks can be executed concurrently. After these tasks complete, 35 more can be performed concurrently, and so on. The benefit of parallel calculation is greatest in the first few steps, then tapers off as fewer and fewer tasks are performed concurrently.
    - Essbase writes a message to the application log indicating how many of the tasks are empty (contain no calculations):
    [Tue Nov 27 12:30:44 2001]Local/CCDemo/Finance/essexer/Info(1012681) Empty tasks [91,1,0,0]
    In the example log message above, Essbase indicates that 91 of the tasks at level zero were empty.
    If the ratio of empty tasks to the tasks specified in the task schedule is greater than 50%, then parallelism may not be giving you improved performance, because of the high sparsity in the data model.
    Steven [email protected]
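    As a sketch, the corresponding calc script settings (the thread and dimension counts are illustrative); CALCTASKDIMS 2 tells Essbase to use the last two sparse dimensions when identifying parallel tasks:
    /* use up to 3 threads, and the last 2 sparse dimensions for task identification */
    SET CALCPARALLEL 3;
    SET CALCTASKDIMS 2;
    CALC ALL;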

  • Parallel calc script execution

    Hi All,
    We have an HP-UX box with 4 processors and 15 calc scripts. For parallel execution of these scripts we have set CALCPARALLEL to 3 in the .cfg file. My question is: if these scripts are fired at the same time, will they execute in a sequential manner or in parallel?
    We have 11 dimensions in our outline, of which 10 are sparse and 1 is dense. We have analysed the log but could not figure out anything.
    Any suggestions/guidance !!
    Regards
    -len

    To run three calc scripts simultaneously, you're going to need three people who kick it off in EAS/Excel/MaxL or more likely three separately yet simultaneously scheduled MaxL scripts that execute the calculations.
    From a data integrity (or just your sanity) perspective, it is good that the three calcs are addressing different parts of the database.
    However, from an Essbase data cache/disk controller perspective, the two are going to be spending a lot of time thrashing as Essbase tries to load blocks for all three calcs into and out of memory as the calculations proceed.
    You may find that these processes are faster when they are truly run one after the other.
    Of course, the only way to really know is to benchmark both approaches and see what happens.
    Regards,
    Cameron Lackpour
    Edited by: CL on May 16, 2009 5:57 PM
    Whoops, three, not two.
    Edited by: CL on May 16, 2009 5:57 PM
    Whoops again, two referred to the data cache/disk controller, not the three calcs. I should not reread my posts and think that I can "improve" them.
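    To make Cameron's scheduling point concrete, a sketch of three simultaneously launched MaxL sessions (the script names are hypothetical); note that CALCPARALLEL 3 governs the threads inside a single calculation, not how many scripts run at once:
    # each .msh logs in and runs one script, e.g.: execute calculation 'App'.'Db'.'calc1';
    essmsh run_calc1.msh &
    essmsh run_calc2.msh &
    essmsh run_calc3.msh &
    wait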

  • Mac OSX Lion 10.7.4 and Parallels (memory issues)

    I bought a 2011 Macbook Pro i7 2.2ghz with 4gb of ram, 5400 rpm 500gb hard drive.  Base 15 inch model pretty much. 
    I actually purchased it for a few reasons.  1) I had a 1200 dollar credit at a local electronics store and 2) I figured the battery life would be decent and 3) I was interested in playing around with Mac OSX a bit more.
    Long story short I still need to use windows for various applications and for work.  So I purchased Parallels since it seemed to have pretty good reviews and installed Windows 7 in one and Windows XP in one.  Giving them 2gb for 7 and 1 gb for windows XP.  Now since I knew I wouldn't have enough ram, I upgraded my ram to 8gb and upgraded the hard drive to a 7200 rpm.
    So this is where my problem is, I am unable to run both of them at the same time without Mac running out of memory and swapping like no tomorrow bringing the entire computer to a halt.
    In general Lion seems to be pretty slow, even without any virtual machines running.  I have done a reinstall of the entire OS, repaired permissions etc.
    So at this point I think I have come to the conclusion that the only solution is to again get more memory and go up to 16 gb, which is crazy in my opinion.  I also installed Windows 7 in bootcamp and then installed Vmware workstation and I can run 4 virtuals (1 windows 7, 2xps and a win2k3) before the base OS gets really sluggish, so it isn't the hardware, it is simply the OS.
    Case in point: my other laptop, a Lenovo W700ds (for anyone who has seen this thing, it is a beast with horrible battery life... one of the reasons I wanted the Mac), runs a Core 2 Quad and has 8gb of ram, and I usually keep 3 virtuals up without any hit to Windows 7. But it also only gets 30-45 minutes of battery life, while my Macbook Pro gets about 2-2.5 hours with the virtuals running; however, everything runs like crap, so I usually have to suspend and keep only 1 open at a time.
    My question is simply: is this normal? Should better hardware have such poor performance, or am I just expecting too much? At this point my options are to install Windows 7 natively (which I'd rather not do; I actually do like some of Lion's features) or buy even more ram.
    I was also thinking of getting rid of the superdrive and moving my main drive there and then getting a 256gb SSD as the primary.  But it seems to me at this point I am just dumping money into something that should already do what I want.  This would split the hard drive access load though.
    Any suggestions? 
    Thanks

    Honestly, I would never run both Win 7 & XP at the same time in VMs. 7 needs at least 4GB to run properly and uses more resources than XP.
    I personally use VMware Fusion and XP. At this time there is no software, that I'm aware of or use, that does not run on XP. So for my Mac Win VM purposes I only run XP, with the software I need to use in Windows. This works very well for me. I have 2 CPUs and 3GB of RAM assigned to XP, leaving the other 2 CPU cores (4 HT cores) and 5GB of RAM for OS X.
    Not sure why you need, or want, to run both versions of Windows at the same time. I don't see the need, as stated above.

  • Open Item Management and parallel accounting issue

    We are using parallel accounting; there are two ledgers set up.
    With this setup, posting to the leading ledger automatically posts entries to the non-leading ledger.
    Our users have a requirement that, for adjustment purposes, they want to post to just one ledger (by specifying the ledger group).
    But we encounter a problem: when a GL account is set up with 'Open Item Management', FBB1 doesn't allow posting to it directly when we enter the ledger group.
    Can the experts advise: with parallel accounting, do accounts with 'Open Item Management' have to post to all ledgers?
    Thank you.

    Yes, this is a precondition: if a particular posting needs to be made to a non-leading ledger, then the accounts should not be managed as open items.
    If an account is managed as open items, then the posting happens to all the ledgers.
    Thanks
    Regards,
    Manish Garg

  • Essbase Defrag and Calc Issue

    I have a calc script that runs against a BSO cube with 11 dimensions; it runs for 15 minutes. The script does some calculations and rollups (AGG, CALC DIM, etc.). I can see the blocks go from 15k at the start to 500k at the end.
    Then when I re-run the script, it runs for over 2 hours, and of course that's due to the block explosion. So I defragment and clear upper blocks by doing a:
    CLEARBLOCK UPPER;
    CLEARBLOCK EMPTY;
    After this the number of blocks goes way down to 25k blocks.
    And still the calc continues to run over 2 hours. So my question is: why this behavior? Is there anything I can do besides reloading the cube from scratch? I know that if I do a complete clear, it obviously runs in the original time again, but I want to avoid having to do that. Is there a step I am missing?

    Here are some tips:
    - Please review "intelligent calculation": this feature helps when only a few blocks have changed, by limiting the recalculation to the affected blocks.
    - Your index cache may be too low: the index has grown with the number of blocks beyond your index cache size -> more disk IO than your setup can handle.
    - The overall design may need tuning (block size / density / dynamic calculations / ...) -> contact a consultant.
    - What is the business case for the 10 min calc time limit? Will users wait for the calculation to finish?
    Sascha.
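    A minimal sketch of the intelligent calculation approach Sascha mentions; with it enabled, a re-run only recalculates blocks marked dirty since the last pass:
    /* recalculate only dirty blocks, then mark them clean for the next run */
    SET UPDATECALC ON;
    SET CLEARUPDATESTATUS AFTER;
    CALC ALL;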

  • BPC- Goodwill calc issue

    We have tagged 4 accounts as SHCAP in the TYPELIM property of the account dimension. We are using SHCAP as the source account in one of our business rules; the system is considering only 3 of the 4 accounts and completely ignoring the amount uploaded in the fourth account. We have tried the following to resolve the issue:
    1. We tried clearing the data of all four accounts and uploading it again
    2. We tried creating a new ID in the account dimension for the fourth account and tagging it with SHCAP in TYPELIM
    3. We tried changing the datasource of the fourth account
    Please suggest.

    Hi,
    Was this property value entry done in the admin client, or in the NW backend? If the former, please check in RSA1 to make sure that the value is there, as the front and back ends can sometimes be out of sync. Typically this is with 7.5 SP07/SP08... incidentally, what SP are you on?
    A few other things to do/check:
    - Is the 4th dimension member after an empty row in the dimension sheet? NW does not like empty rows
    - Is there any logic/business rules at all happening immediately before this calculation is attempting to take place? It could be that the data is not in the correct source location for the logic to work
    - Go into logic tester transaction (UJKT), enter a SELECT statement that looks for all accounts with the property value SHCAP, then when you execute (simulate), the right-hand side should show the list of 4 accounts. If it is in this list then I don't think it's a problem with the member/property
    Hope one of these helps.
    Tom.
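    A minimal sketch of the UJKT check Tom describes, assuming the dimension is named ACCOUNT (adjust to your model); the fourth account should appear in the result list on simulation:
    // list every account whose TYPELIM property is SHCAP
    *SELECT(%SHCAP_ACCTS%, "ID", "ACCOUNT", "TYPELIM = 'SHCAP'")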
