Essbase Aggregation Issue

Hi,
I am facing a serious problem with Essbase. I am implementing Hyperion Planning 11.1.2.2 for one of our clients, and this is the first time I am implementing this version.
Aggregation is not working in my setup. I have written a rule to aggregate the hierarchy and have tried AGG, CALC DIM, etc., but the issue remains.
I have also tried running the Calculate web form rule file, but aggregation is still not happening.
I have also noticed that in Planning dimension maintenance, even the level 0 members show a consolidation operator.
Does anybody have a clue?
Please help me, as I am unable to proceed further.
Thanks in Advance.
Regards,
Sunil.

It is probably worth testing your logic as a calc script, running it directly against the Essbase database using EAS, and then checking the data with Smart View. This process should eliminate any issues in Planning or Calc Manager.
If you are still having problems then post your script and I am sure somebody will give you some further advice.
Cheers
John
http://john-goodwin.blogspot.com/
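As a starting point, a minimal calc script to try directly in EAS might look like the sketch below. Note that "Budget", "Working", "FY13", "Entity" and "Product" are placeholder member and dimension names, not taken from the original post; substitute the ones from your own outline.

```
/* Hypothetical aggregation script; replace the FIX members and
   dimension names with those from your own outline. */
SET UPDATECALC OFF;
SET AGGMISSG ON;
FIX ("Budget", "Working", "FY13")
    AGG ("Entity", "Product");   /* aggregate the sparse dimensions */
ENDFIX
```

If this aggregates correctly when run from EAS but not from Planning, the problem is more likely in the Planning/Calc Manager layer than in the outline itself.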

Similar Messages

  • Aggregation issue Bex report

    Hi all,
I am facing the following aggregation issue at the reporting level (BW system 3.5).
    Cube1
    Material, Company code, Cost center, Material, Month,   Volume KF
    Cube2
    Material, Company code, Cost center, Material, Month,   Price KF
    Multi provider
    Material, Company code, Cost center, Material, Month,   Volume KF, Price KF
    Report
- A global calculated key figure 'Value' is based on the basic key figures Volume KF and Price KF.
- Time of aggregation is set to "Before aggregation" in the properties of the calculated key figure.
- Only one characteristic, 'Company code', is used in the report.
When I execute this report, the calculated KF is not working (no values). If I change the time of aggregation to "After aggregation" in the properties of the calculated key figure, it works but gives wrong values: Price gets aggregated (added up) and multiplied by Volume, which is wrong.
Can you please give me an ideal solution to resolve this?
    Thanks,
    Harry

    Hi all,
Can I assume that there is no solution for this issue?
    Thanks,
    Harry

  • Aggregation issue for report with bw structure

    Hi,
I am facing an aggregation issue while grouping reports in WebI.
We have a BW query with 16 values, which we bring into BO as a structure. Of the 16, 8 are percentage values (the aggregation type should be average).
If we bring the data at site level, the data comes through properly. But if we use the same query and try to sum/group (at region level), the percentages get added.
Since it is a dashboard report with lots of filters, we cannot go for a separate query at each level (site, region, zone).
How can we resolve this? Please give me your suggestions.
    Regards
    Baby

    Hi,
Since we were using a structure, it was not possible to produce the required result in BO.
We changed the structure to key figures and brought all of them into BO. All the column formulas are now on the BO side.
Now it is working fine.
    Regards
    Baby
    Edited by: Baby on May 10, 2010 11:39 AM

  • Essbase performance issue

    Hi all,
We have encountered an Essbase performance issue whose root cause we don't know.
We have configured a server to run Essbase with an 8-core CPU and 16 GB of RAM. We found that the Essbase calculation uses up to 80% CPU and only about 8 GB of RAM. I also checked the I/O rate at the same time, but the disk load is not very heavy. We suspect the Essbase calculation engine is waiting on some kind of resource, but it does not appear to be CPU bound, memory bound, or I/O bound.
Do you think it would help to keep the whole Essbase database (around 30 GB) on a RAM-based disk drive to speed up I/O?
    Thanks if you have some ideas for us to investigate.
    Edited by: hyperion planning user on Jun 2, 2009 12:27 AM
    Edited by: hyperion planning user on Jun 2, 2009 12:36 AM

    I'm confused -- is it CPU bound or not?
    You write:
"We found that the Essbase calculation can use up to 80% CPU and about 8GB RAM only."
Do you mean 80% of all eight of your CPUs? That sure sounds CPU-bound to me. In fact, I wish (within reason) that most of my Essbase calculations worked that way -- that would mean that I have the disk caches tuned to their utmost efficiency.
    This means you're getting data from disk almost as fast as is possible.
    You're not going to be able to get everything into memory for two reasons:
    1) 30 GB of .IND and .PAG/.DAT files isn't going to fit into Essbase's addressable memory space. See: using RAM disk to speed up Essbase calculation and rollup
2) Even when the database is nice and small and you can stick the whole thing in a cache, uncompressed, Essbase is still "smart" and will keep a portion of it on disk during calcs. This doesn't make sense in isolation, but empirically you can monitor disk usage during a calculation on a database that is in theory totally enclosed in the cache and see the disk getting hit. This may be related to Essbase's general housekeeping -- I don't know. In any case, this is generally not a real-world case, unless you're running your business on my Very Favorite Database In The Whole Wide World -- Sample.Basic.
Or are you saying that you will define a real RAM drive (and it would help if you really could allocate real RAM, and not an OS-managed sort-of-RAM-sort-of-DASD situation) and point Essbase there? That is sort of risky, isn't it? How will you flush it to real DASD for backup? Exports?
    Regards,
    Cameron Lackpour

  • Require Very Urgent Help on Aggregation Issue. Thanks in advance.

    Hi All,
I am new to Essbase.
I have an issue with aggregation in Essbase. I load data at level zero, but when I aggregate using CALC DIM I do not get any values.
The level-zero load is at:
Budget, Version, Levmbr(Entity,0), Levmbr(Accounts,0), NoRegion, NoLoc, NoMod, Year, Month.
When I use the default calc, or CALC DIM for the above, no aggregation takes place at the parent level.
Requirement:
Values at Version, Region, Location, Model, Year, Month, Budget level.
Please advise.
    Thanks in advance.
    Bal
    Edited by: user11091956 on Mar 19, 2010 1:07 AM
    Edited by: user11091956 on Mar 19, 2010 1:10 AM

    Hi Bal,
If the data loaded without error and the default calc still results in non-aggregated values, the only way I can imagine that happening is through your outline consolidations.
Check whether the members the data is loaded to have IGNORE (~) set as their consolidation operator.
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/
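As an illustration of what to look for, a hypothetical Entity branch in the outline editor might show consolidation operators like this (the member names are made up):

```
Entity
  EastRegion  (+)   /* rolls up into Entity */
  WestRegion  (+)   /* rolls up into Entity */
  Adjustments (~)   /* ignore: excluded from consolidation */
```

If the level-0 members carrying the data are tagged (~), the parents will stay #Missing even after CALC DIM.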

  • Aggregation issue on a T5220

    I have a Enterprise T5220 server, running Solaris 10 that I am using as a backup server. On this server, I have a Layer 4, LACP-enabled link aggregation set up using two of the server's Gigabit NICs (e1000g2 and e1000g3) and until recently I was getting up to and sometimes over 1.5 Gb/s as desired. However, something has happened recently to where I can now barely get over 1 Gb/s. As far as I know, no patches were applied to the server and no changes were made to the switch that it's connected to (Nortel Passport 8600 Series) and the total amount of backup data sent to the server has stayed fairly constant. I have tried setting up the aggregation multiple times and in multiple ways to no avail. (LACP enabled/disabled, different policies, etc.) I've also tried using different ports on the server and switch to rule out any faulty port problems. Our networking guys assure me that the aggregation is set up correctly on the switch side but I can get more details if needed.
    In order to attempt to better troubleshoot the problem, I run one of several network speed tools (nttcp, nepim, & iperf) as the "server" on the T5220, and I set up a spare X2100 as a "client". Both the server and client are connected to the same switch. The first set of tests with all three tools yields roughly 600 Mb/s. This seems a bit low to me, I seem to remember getting 700+ Mb/s prior to this "issue". When I run a second set of tests from two separate "client" X2100 servers, coming in on two different Gig ports on the T5220, each port also does ~600 Mb/s. I have also tried using crossover cables and I only get maybe a 50-75 Mb/s increase. After Googling Solaris network optimizations, I found that if I double tcp_max_buf to 2097152, and set tcp_xmit_hiwat & tcp_recv_hiwat to 524288, it bumps up the speed of a single Gig port to ~920 Mb/s. That's more like it!
    Unfortunately however, even with the TCP tweaks enabled, I still only get a little over 1 Gb/s through the two aggregated Gig ports. It seems as though the aggregation is only using one port, though MRTG graphs of the two switch ports do in fact show that they are both being utilized equally, essentially splitting the 1 Gb/s speed between
    the two ports.
    Problem with the server? switch? Aggregation software? All the above? At any rate, I seem to be missing something.. Any help regarding this issue would be greatly appreciated!
    Regards,
    Jim
    Output of several commands on the T5220:
    uname -a:
    SunOS oitbus1 5.10 Generic_137111-07 sun4v sparc SUNW,SPARC-Enterprise-T5220
    ifconfig -a (IP and broadcast hidden for security):
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    aggr1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet x.x.x.x netmask ffffff00 broadcast x.x.x.x
    ether 0:14:4f:ec:bc:1e
    dladm show-dev:
    e1000g0 link: unknown speed: 0 Mbps duplex: half
    e1000g1 link: unknown speed: 0 Mbps duplex: half
    e1000g2 link: up speed: 1000 Mbps duplex: full
    e1000g3 link: up speed: 1000 Mbps duplex: full
    dladm show-link:
    e1000g0 type: non-vlan mtu: 1500 device: e1000g0
    e1000g1 type: non-vlan mtu: 1500 device: e1000g1
    e1000g2 type: non-vlan mtu: 1500 device: e1000g2
    e1000g3 type: non-vlan mtu: 1500 device: e1000g3
    aggr1 type: non-vlan mtu: 1500 aggregation: key 1
    dladm show-aggr:
    key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto) device address speed
    duplex link state
    e1000g2 0:14:4f:ec:bc:1e 1000 Mbps full up attached
    e1000g3 <unknown> 1000 Mbps full up attached
    dladm show-aggr -L:
    key: 1 (0x0001) policy: L4 address: 0:14:4f:ec:bc:1e (auto) LACP mode: active LACP timer: short
    device activity timeout aggregatable sync coll dist defaulted expired
    e1000g2 active short yes yes yes yes no no
    e1000g3 active short yes yes yes yes no no
    dladm show-aggr -s:
    key: 1 ipackets rbytes opackets obytes %ipkts %opkts
    Total 464982722061215050501612388529872161440848661
    e1000g2 30677028844072327428231142100939796617960694 66.0 59.5
    e1000g3 15821243372049177622000967520476 64822888149 34.0 40.5
    Edited by: JimBuitt on Sep 26, 2008 12:04 PM
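For reference, the TCP tuning mentioned above is typically applied on Solaris 10 with ndd, along these lines (a sketch using the values quoted in the post; these settings do not survive a reboot unless placed in a startup script):

```
# Raise the maximum TCP buffer size and the transmit/receive
# high-water marks (values from the post above)
ndd -set /dev/tcp tcp_max_buf 2097152
ndd -set /dev/tcp tcp_xmit_hiwat 524288
ndd -set /dev/tcp tcp_recv_hiwat 524288
```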

    JimBuitt wrote:
    I have a Enterprise T5220 server, running Solaris 10 that I am using as a backup server. On this server, I have a Layer 4, LACP-enabled link aggregation set up using two of the server's Gigabit NICs (e1000g2 and e1000g3) and until recently I was getting up to and sometimes over 1.5 Gb/s as desired. However, something has happened recently to where I can now barely get over 1 Gb/s.Is this with multiple backup streams or just one?
    I would not expect to get higher throughput with a single stream. Only with the aggregate throughput of multiple streams.
    Darren

  • Essbase copy issue

    Hi
We have an issue with Essbase cube copy in EAS (Essbase and EAS are version 11.1.2.2). When we copy a cube, we expect the user and group access to carry over to the copied cube; for example, if groupA is assigned read access to cube1 and we copy cube1 to cube2, we cannot see groupA assigned read access to cube2. This did not happen in System 9, and it is a major issue for us since we rely on taking a monthly snapshot of the cube, and losing the cube's security when copying is not acceptable.

Are you copying it within the server or across servers?
I am not very familiar with LCM, but if you are using it, I guess you may be able to do it from there.
If LCM cannot help you, you can achieve it using MaxL as Sh!va said. Take an export of the security, parse it using a batch file (Windows) or shell script (UNIX), and apply the security again (the export will serve as a security backup too).
We had an application way back in v6.5.7 and did it the same way.
    Amarnath
    http://amarnath-essbase-blog.blogspot.com
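A minimal MaxL sketch of the export-and-regrant approach (the file name, application/database names and group name below are placeholders, not from the thread):

```
/* Back up the Essbase security file, then re-grant access
   on the copied cube */
export security_file to data_file 'sec_backup.bak';
grant read on database Sample.Basic2 to groupA;
```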

  • Complex Essbase MDX Issue - Need Guidance

    Hi,
I have a complex Essbase issue in ASO version 11.1.2.2. Currently I have an MDX formula on a Measures member named '10th Percentile'. It calculates the 10th percentile perfectly. The new requirement is to create a new Measures member that, instead of calculating the 10th-percentile value, displays the name of the Customer dimension member whose value is the 10th percentile. So if I do a retrieval and the '10th Percentile' is 3.23, it needs to display the customer name behind the 3.23.
So I altered the formula to do what I think needs to be done, and it verifies. However, if I retrieve on that new measure in the Excel Add-in, I get an error: "An error [1200315] occurred in Spreadsheet Extractor." If I navigate without data I don't get the error, but I also don't get any data, which I obviously need. So my question is: if MDX supports reporting on metadata, not just data, how can one report on it? Ideally I need this to work in the Excel Add-in, as the client is using a customized VBA template for their end users.
    Any ideas and help?

Here's the formula. The part between asterisks is the new part:
    IIF ( [Lbs Per Yard].CurrentMember IS [Lbs Per Yard].[No_Lbs/Yd] ,
    IIF( [Count_Price] = Missing, Missing, IIF( [Count_Price] < 2 , Missing,
    { Order (
    Filter ( CROSSJOIN ( Leaves ( [Service].CurrentMember)
    , Filter ( CROSSJOIN ( Leaves ( [Segment].CurrentMember)
    , Filter ( CROSSJOIN ( Leaves ( [Customer Type].CurrentMember)
    , Filter ( CROSSJOIN ( Leaves ( [Zip Code].CurrentMember)
    , Filter ( CROSSJOIN ( Leaves ( [Quantities].CurrentMember)
    , Filter ( CROSSJOIN ( Leaves ( [Frequencies].CurrentMember)
    , Filter ( CROSSJOIN ( Leaves ( [Yardages].CurrentMember)
    , Filter ( Leaves ( [Contract Year].CurrentMember)
    , [$/Yd] <> Missing ))
    , [$/Yd] <> Missing ))
    , [$/Yd] <> Missing ))
    , [$/Yd] <> Missing ))
    , [$/Yd] <> Missing ))
    , [$/Yd] <> Missing ))
    , [$/Yd] <> Missing ))
    , [$/Yd] <> Missing )
    , [$/Yd] /*this is the measure we're using for sort */
    , BASC /* sort in $/Yd in ascending order */
    ) AS OrderedSetOfItems} /* here we define an alias for the set in order to be able to use it later */
    .Item ( Round ( Count ( OrderedSetOfItems) *
    10 / 100 /*where we specify which percentile is being calculated */
    + 0.5 , 0 ) -1 ) *.Item (3-1).[MEMBER_NAME]*
    /* this takes Nth item from the ordered set (0-based index, hence -1) */
    /* .Name takes its name */
    , Missing )

  • BIGINT aggregation issue in Hana rev 91

    Hi,
I have a BIGINT value field that isn't aggregating beyond the 32-bit INTEGER range; the sum wraps around to -2,147,483,648.
I'm seeing results as follows:
Period   Value
5        320,272,401
6        635,021,492
7        515,993,660
8        546,668,931
9        702,138,445
10       438,782,780
11       459,387,988
12       722,479,250
Result   -2,147,483,648
We've recently upgraded from rev 83 to 91. I'm pretty sure this is a new issue - has anyone else seen this?
    I'm hoping there is some kind of fix as I don't want to have to convert fields throughout our system to a longer DECIMAL.
    thanks
    Guy

    I've figured out this issue only affects Analytical Views that have calculated attributes.
    Such views generate a CALCULATION SCENARIO in _SYS_BIC, which seems to incorrectly define my field (which is in the data foundation, modelled as a BIGINT) as SQL Type 4, sqlLength 9, as per the following:
    {"__Attribute__": true,"name": "miles","role": 2,"datatype": {"__DataType__": true,"type": 66,"length": 18,"sqlType": 4,"sqlLength": 9},"kfAggregationType": 1,"attributeType": 0}
    I also have calculated measures modelled as BIGINT's in the Analytical View. These are correctly defined in the CALCULATION SCENARIO with an SQL length of 18, for example:
    {"__Attribute__": true,"name": "count","role": 2,"datatype": {"__DataType__": true,"type": 66,"length": 18,"sqlType": 34,"sqlLength": 18},"kfAggregationType": 1,"attributeType": 0}
This looks like a bug to me. As a workaround I had to define a calculated measure of type BIGINT which simply equals my "miles" field, and then hide the original field.
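Outside the modeler, the same overflow can be sidestepped in plain SQL by widening the type before aggregating. This is only a sketch; the table and column names here are hypothetical:

```sql
-- If "miles" is summed through a path that narrows it to INTEGER,
-- casting before the SUM preserves the full BIGINT range
SELECT period,
       SUM(CAST(miles AS BIGINT)) AS total_miles
FROM   fact_mileage
GROUP  BY period;
```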

  • Essbase Deploy issue

When I run the diagnostics utility it returns all green, but I get this error trying to deploy an Essbase app:
Warning: An 'Internal Server Error' error occurred communicating with the server.
URI: http://devone.serverone.com:19000/awb/integration.verifyApplication.do
Status: 500 - Internal Server Error
Content: text/html; charset=UTF-8
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Draft//EN">
    <HTML>
    <HEAD>
    <TITLE>Error 500--Internal Server Error</TITLE>
    </HEAD>
    <BODY bgcolor="white">
    <FONT FACE=Helvetica><BR CLEAR=all>
    <TABLE border=0 cellspacing=5><TR><TD><BR CLEAR=all>
    <FONT FACE="Helvetica" COLOR="black" SIZE="3"><H2>Error 500--Internal Server Error</H2>
    </FONT></TD></TR>
    </TABLE>
    <TABLE border=0 width=100% cellpadding=10><TR><TD VALIGN=top WIDTH=100% BGCOLOR=white><FONT FACE="Courier New"><FONT FACE="Helvetica" SIZE="3"><H3>From RFC 2068 <i>Hypertext Transfer Protocol -- HTTP/1.1</i>:</H3>
    </FONT><FONT FACE="Helvetica" SIZE="3"><H4>10.5.1 500 Internal Server Error</H4>
    </FONT><P><FONT FACE="Courier New">The server encountered an unexpected condition which prevented it from fulfilling the request.</FONT></P>
    </FONT></TD></TR>
    </TABLE>
    </BODY>
    </HTML>
    any suggestions?
    Thanks,
    P

Which version of EPM are you using?
If it is v11.1.1.3, I had a similar kind of issue with the FDM module. I was getting a Server 500 error while trying to access the FDM application URL and got it resolved with the following steps. You can try the same solution with Essbase.
The issue was that EPM System Configurator copied the wrong WebLogic IIS plugin on the Windows 64-bit operating system.
To work around this issue:
1. Stop IIS via the IIS Admin Service.
2. Copy iisforward.dll from EPM_ORACLE_HOME/…/wlserver_10.3/server/plugin/win/x64/iisforward.dll to EPM_ORACLE_HOME/DOMAIN_NAME/VirtualHosts/iisforward.dll.
3. Copy iisproxy.dll from EPM_ORACLE_HOME/…/wlserver_10.3/server/plugin/win/x64/iisproxy.dll to all folders under EPM_ORACLE_HOME/DOMAIN_NAME/VirtualHosts.
Hope it helps :-)
    Tej
    Edited by: Tej B on Jun 7, 2013 3:33 AM

  • Essbase - eis issue

    Hi,
I have been facing a drillthrough issue that started a couple of weeks ago.
The users are not able to drill through to the data (the data is in Sybase) from Excel. They are getting an ESSDTU error message.
I re-ran "ais_start" and then it started working. Why do I have to do this so often? I have not had this issue before.
PS: Some of our servers went down recently; I have been facing this issue since that day.
Please help/assist to resolve this issue.
    Regards,
    Senthil.
    Edited by: user12996257 on 01-Jul-2010 01:25
    Edited by: user12996257 on 05-Jul-2010 05:49

Since I made the post I have explored some more. The SQL statements for the drill-through report definitions work for all cost centres when I run them in a SQL console (SQL Server Management Studio). But even when I create a new drill-through report, it still works only for the cost centres that previously worked with the existing drill-through reports. There is no significant difference in Essbase between the cost centres that work and the ones that don't, so I am completely stumped.
HELP!

  • Essbase filters issue

    Hi guys
Can security filters in Essbase be created by separating the dimensions, as in: filter 1 restricting access on one dimension, filter 2 restricting access on another three dimensions, and filter 3 on the remaining three dimensions? That way, when access has to be modified as time passes, only one filter needs to change, not all three.
I have created security filters in Essbase in the above-mentioned fashion, and they do not seem to work.
Should a filter mention all the dimensions to work properly?
    Thanks
    Edited by: dave78 on Jun 27, 2012 5:50 AM

    Hi
    I've had similar issues in the past and this is what I discovered. A user or group can only be assigned a single filter at any one time, there is a way to give a user access to more than one filter by adding the user to multiple groups where each group has one filter that you want the user to inherit.
    The issue you are probably having is that Essbase filters join using different conditions depending upon what is selected. By that I mean that if you have two filters that have the members from the same dimension specified they will join using an AND condition, e.g.
    Filter1 - Write EntityX
    Filter2 - Write EntityY
    Resulting access - Write EntityX and EntityY
    However if the members are from different dimensions then the filters join using an OR condition, e.g.
    Filter1 - Write EntityX
    Filter2 - Write ScenarioA
    Resulting access - Write EntityX with any scenario and write ScenarioA with any entity
    If you use members from multiple dimensions, but the same dimensions then this seems to use the AND but is cumulative, e.g.
    Filter1 - Write EntityX, ScenarioA
    Filter2 - Write EntityY, ScenarioB
    Resulting access - Write EntityX or EntityY with either ScenarioA or ScenarioB
Our situation was an Essbase cube built by EAL from HFM, but we couldn't get EAL to pass the security through; it kept failing due to filter size. As the HFM security classes in our case were dimension specific, we tried to follow that model in Essbase, but it wouldn't work. As our Essbase model needed no write access for any user (all data was passed from HFM, so any changes were written there), we decided the only approach was to restrict the security to a single dimension (Entity) and accept that some users might be able to see scenarios that we didn't necessarily want them to. That was better than having to create 4-5 times as many filters and the corresponding groups to put the filters into, etc.
    Hope this helps, apologies if you have to go back to the drawing board!
    Stuart
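The merge behaviour described above can be reproduced with two small filters defined in MaxL (the application, database, filter and member names below are placeholders):

```
/* Two hypothetical filters on different dimensions. A user who
   inherits both (via two groups) gets the OR-style combination:
   write to EntityX with any scenario, and to ScenarioA with any entity. */
create or replace filter Sample.Basic.filt_entity write on '"EntityX"';
create or replace filter Sample.Basic.filt_scen write on '"ScenarioA"';
```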

  • Essbase users issue

Hi Gurus,
We created around 1,800 users for the current project. The issue is that when we synchronize all users from Shared Services, Essbase may hang.
These are native users.
So kindly let me know how to handle this issue.
Has anyone seen this behavior before, or does anyone have suggestions?
Thanks in advance.

1. When you have created users and assigned them roles, you have to refresh security through EAS. Of course it will take some time for 1,800 users, and after that you will have to reconnect, which is just what the message says when you refresh.
2. I will share what I have done: I had a list of users, generated MaxL scripts from Excel with one statement per user (create user 'john doe'; ...), and executed them; the same goes for roles. When you execute these MaxL scripts the users will be in sync with Shared Services. (This has worked for me.)
Also go through the DBAG from page 617:
http://download.oracle.com/docs/cd/E12825_01/epm.111/esb_dbag.pdf
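A sketch of the kind of spreadsheet-generated MaxL the reply describes (the user name, password and group here are hypothetical):

```
/* One pair of statements per user, generated from the user list */
create user 'jdoe' identified by 'Welc0me1';
alter user 'jdoe' add to group 'planners';
```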

  • Essbase Memory issue

    Hi All:
I have an Essbase BSO cube with 35 GB of page files and 6 GB of index files. The version is 11.1.2.2. The data cache setting is 8 GB and the index cache setting is 6 GB. The issue is that the essvr.exe process uses about 16 GB of memory, and the memory is not released when there is no activity on the cube. The server has about 24 GB of memory. I now need to add two duplicate cubes with the same configuration to the server, which keeps the server's memory usage at about 98%.
My questions are:
1. Is the memory required equal to data cache + index cache + data file cache? What about the calc cache? Currently the calc cache high setting is 2 GB.
2. Why is the memory not released when there is no activity?
3. What are the optimal cache settings? Currently the index cache hit ratio is 100% and the data cache hit ratio is 50%.
4. What else can I do besides asking the sysadmins to add more memory?
PS: The cube is reset every night and all 35 GB of data is reloaded. After that, an aggregation is run.
    Thanks
    Edited by: user8838483 on Feb 26, 2013 3:28 PM

By setting the cache sizes large you prevent the operating system from keeping the most frequently needed things in RAM; the OS does this automatically using memory-mapped I/O.
So set the caches low except during calculations. Since you want the index files in RAM much more than the data files, raise the index cache just before the calc and lower it afterwards. Do the same for loads (large ones, not simple lock-and-sends).
I agree with Cameron that this might do better as ASO, but I also note that the ratio of index file size to data file size is quite large; consider increasing your block size.
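That raise-then-lower pattern can be scripted in MaxL around the nightly calc. The application, database, calc script name and sizes below are placeholders, and note that some cache-size changes only take effect when the database restarts:

```
/* Raise the index cache before the big aggregation, then drop it back */
alter database Plan1.Main set index_cache_size 4096mb;
execute calculation Plan1.Main.aggall;
alter database Plan1.Main set index_cache_size 512mb;
```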

Essbase optimization issue, slow

    Hi everyone,
I have made a Planning application that has some larger forms which take over 4 minutes to open, which could be okay, but what puzzles me is that the Essbase server does not use any system resources while the forms are retrieved. CPU load stays between 0 and 1%, while memory stays approximately the same, using only about a third of the memory available.
We have tuned the outline and cache settings to the best of my knowledge, according to what I can find of best practice. I wonder if this is normal, or is anyone else experiencing the same issue?
Suggestions? Do I need to give more details of the system and/or outline?
    Best Regards,
    Kåre

    Hi,
The resources being used may be on the Planning server or even in the client's browser; it all depends on the form design and the structure of your cube.
Planning has to do a lot of the processing, and sometimes the queries against Essbase take no time at all; it is Planning constructing the grid that can take the time.
    Some things to consider in form design :-
    Keep dense dimensions in rows and columns.
    Place static dimensions in POV and hide these dimensions where not relevant to the form.
    Place Scenario, Version, and Year dimensions in the Page wherever possible.
Place sparse dimensions in the POV.
    Minimize using account annotations on data forms.
The biggest impact on data form performance is the grid size: grid size = possible number of rows x possible number of columns, and grid size doubles when multiple currencies are used.
    Enabling shared member security can also impact performance.
    Complex security can also impact performance.
It is worth finding out where the problems lie. I have seen large forms in the past that can take a long time to download to the client's browser, and then you add in the JavaScript processing for the form, which adds overhead.
    It may also be worth considering using smart view if you can't improve the performance of the web forms.
    Cheers
    John
    http://john-goodwin.blogspot.com/
