BPC performance with a large number of dimension members

Hi,
I would like to know if there is a limitation on the number of members in one dimension. This dimension, named PROJET, is often used in expansion on our input schedule reports (to retrieve the projects which belong to the entity entered in the current view).
With approximately 2,500 members in this dimension, the report takes about 4 minutes to expand (or even to open).
We have 8 dimensions, with anywhere from a few to 300 members each. The PROJECT dimension is the biggest in terms of member count.
Thank you in advance for your feedback !
Helene

Hi Helene,
With 3,000 members you should not be experiencing these problems if your report is designed properly.
I'm running BPC 5.1 SP8 on SQL 2005, with a dimension containing 22,000 members. Client PCs are typical (XP, Excel 2007, 1 or 2 GB RAM).
Using EVDRE and this dimension expanding on the rows, most reports & input schedules can expand & refresh in the range of 10 to 30 seconds.
The faster times are when I use a row expansion memberset using dimension properties, such as Active="X". The slower times are when the memberset is hierarchy-based, such as BAS.
If you're using a dynamic template (one using EVEXP for the expansion) then you should start over using EVDRE. It will be much faster, particularly if you optimize your row expansion.
It's often a good idea to add dimension properties specifically for the purpose of optimizing the report expansion, if the dimension has thousands of members. I sometimes go as far as to add properties which mimic the hierarchy (MyLevel2, MyLevel3, etc) just for this purpose.

Similar Messages

  • TableView performance with large number of columns

I notice that it takes a while for table views to populate when they have a large number of columns (more than 100 or so, subjectively).
    Running VisualVM based on CPU Samples, I see that the largest amount of time is spent here:
    javafx.scene.control.TableView.getVisibleLeafIndex() 35.3% 8,113 ms
    next is:
javafx.scene.Parent$1.onProposedChange() 9.5% 2,193 ms
    followed by
    javafx.scene.control.Control.loadSkinClass() 5.2% 1,193 ms
    I am using JavaFx 2.1 co-bundled with Java7u4. Is this to be expected, or are there some performance tuning hints I should know?
    Thanks,
    - Pat

We're actually doing some TableView performance work right now. I wonder if you could file an issue with a simple reproducible test case? I haven't seen the same data you have here in our profiles (nearly all time is spent reapplying CSS), so I would be interested in your exact test, to be able to profile it and see what is going on.
    Thanks
    Richard

  • ALV performance with large number of columns

    Dear friends,
I have created an ALV grid which has approximately 225 fields in it, using classes and not the REUSE function modules.
After the ALV grid is first displayed, if the user scrolls down to the next page it takes significant time to display the data on that page. The documentation says that the ALV grid only caches the data upon display (after the first display of a page, and before the ALV grid data is refreshed, it works fine).
Is there any mechanism for caching the entire ALV grid data before/after the method set_table_for_first_display is called?
Helpful answers will be appropriately rewarded.
    Cheers
    Nitesh


  • Hyperion Financial Reporting server 9.3.1 - Performance with Large batches

    I have not been able to find any help yet, so hopefully someone here can help.
We have several Financial Reporting servers so that we can schedule reports and keep users on separate servers so that they do not interfere with each other.
The problem is that when bursted batch reports that select 100 to 1,000+ members from the bursting dimension run, the resources max out memory, and if multiple batches with the same large number of (but different) members run at the same time, we start having failures where the services hang or, worse, the server crashes (one server crashed early this morning).
The Windows 2003 servers are Dell 2950s, 1x Intel Core 2 Duo 3 GHz, with 8 GB of RAM.
We found that if we set the Java memory parameters at anything higher than 1.5 GB the services do not start, so the 8 GB available is hardly being used by the server, since the FR services (batch scheduler, print server, reports, RMI) are the only things running.
    The batches are bursting the report for each member to a network folder for users to access and for archival purposes.
    We may need to get Oracle involved, but if anyone here has insight I'd appreciate the assistance.

    Hi Robert
I have come across similar issues where the reports take much longer to run as part of a batch than they do when accessed directly via the Workspace. Our issue was that Financial Reporting was not dropping the connection to Essbase. We were on Windows and had to add a few DWORDs to the registry:
    1. Open the registry and navigate to Local Machine\System\CurrentControlSet\Services\TCPIP\Parameters
    2. Add new DWORD Value named TcpTimedWaitDelay, right click and select Modify. Select decimal radio button, type in 30 (this is the number of seconds that TCP/IP will hold on to a connection before it is released and made available again, the default is 120 seconds)
    3. Add new DWORD Value named MaxUserPort, right click and select Modify. Select decimal radio button, type in 65534 (this determines the highest port number TCP can assign when an application requests an available user port from the system, the default is 5000)
    4. Add new DWORD Value named MaxFreeTcbs, right click and select Modify. Select decimal radio button, type in 6250 (this determines the number of TCP control blocks (TCBs) the system creates to support active connections, the default is 2000. As each connection requires a control block, this value determines how many active connections TCP can support simultaneously. If all control blocks are used and more connection requests arrive, TCP can prematurely release connections in the TIME_WAIT state in order to free a control block for a new connection)
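    For reference, the three values above can be captured in a .reg file (a sketch of the same settings Stuart lists, with the decimal values shown in hex; back up the registry first and reboot afterwards):

    ```reg
    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
    ; 30 seconds before a closed connection leaves TIME_WAIT (default 120)
    "TcpTimedWaitDelay"=dword:0000001e
    ; highest ephemeral port TCP may assign (default 5000)
    "MaxUserPort"=dword:0000fffe
    ; number of TCP control blocks, i.e. max simultaneous connections (default 2000)
    "MaxFreeTcbs"=dword:0000186a
    ```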
    I think we did this to both our essbase and application server and rebooted both afterwards, it made a dramatic improvement to batch times!!
    As a personal note, I try not to have too many batches running at once, as they can interfere with each other and lead to failures. Where I have done this before, we tend to use Windows batch (.bat) files to launch the FR batches from the command line. If time allows, I run a few batches to get a reasonable estimate of the time to complete, and in my .bat file I add a pause of around that amount of time between sending the batch requests to the scheduler. Admittedly, I've not done it yet where the number of reports in a bursting batch is as many as 1,000.
    Hopefully this will help
    Stuart

  • How to Capture a Table with large number of Rows in Web UI Test?

    HI,
    Is there any possibility to capture a DOM table with a large number of rows (say more than 100) in a Web UI test?
    Or is there any bug?

    Hi,
    You can try following code to capture the table values.
    To store the table values in a CSV file (note the escaped backslash in the Java path literal):
    web.table( xpath_of_table ).exportToCSVFile("D:\\exporttable.csv", true);
    To store the table values in a string:
    String tblValues = web.table( xpath_of_table ).exportToCSVString();
    info(tblValues);
    Thanks
    -POPS

  • Lookups with large number of records do not return the page

    Hi,
    I am developing an application using Oracle JHeadstart 10.1.3 Preview Version 10.1.3.0.78
    In my application I created a lookup under Domains and used that lookup for an attribute (Display Type for this attribute is dropDownList) in a group, to get the translation for this attribute. The group has around 14,800 records and the lookup has around 7,400 records.
    When I try to open this group (Tab), the progress shows that it is progressing but it does not open even after a long time.
    If I change the Display Type for the attribute from dropDownList to textInput then it works fine.
    I have other lookups with lower number of records. Those lookups work fine with dropDownList Display Type.
    I only have this kind of problem when I have a lookup with a large number of records.
    Is there any limitation of record number for lookups under Domains?
    How I can solve this?
    I need to translate the attribute (get the description from another table using the code).
    Your help would be appreciated.
    Thanks
    Syed

    We have also faced a similar issue, but for us it was happening when we were using the dropDownList in a table, while the same dropDownList was working in form layout. In our case the JVM would just crash, and after Googling it here in the forums, I found that it might be related to a JVM issue on Windows XP machines without Service Pack 2.
    Anyway, the workaround that we took to get around the issue is to use an LOV instead of a dropDownList in your JHeadstart application.
    Hope this helps...
    - rutwik

  • FR Layout issue with large number of columns

    Hi!
    I'm developing a report in FR 11.1.1.3 with over 30 columns.
    The issue is that when I run the report in web preview, the dropdown of dimension in page goes to the far right and disappears from the display.
    If I reduce the number of the columns I don't have this problem.
    I've already tried to maximize the workspace to the maximum without any result.
    Can anyone help me to deal with reports with large numbers of columns?
    Regards,
    Luís
    Edited by: luisguimaraes on 13-Mar-2012 06:48

    IE8 could be the reason. According to the supported platform matrix (http://www.oracle.com/technetwork/middleware/bi-foundation/oracle-hyperion-epm-system-certific-2-128342.xls), see the tab "EPM System Basic Platform", row 70: in order for IE8 to work, FR and Workspace should be patched.
    FR Patch number: 9657652
    Workspace Patch number: 9314073
    Patches can be found on My Oracle Support. Just search for the patch number.
    Cheers,
    Mehmet

  • Barcode CODE 128 with large number (being rounded?) (BI / XML Publisher 5.6.3)

    After by applying Patch 9440398 as per Oracle's Doc ID 1072226.1, I have successfully created a CODE 128 barcode.
    But I am having an issue when creating a barcode whose value is a large number; specifically, a number with more than around 16 digits.
    Here's my situation...
    In my RTF template I am encoding a barcode for the number 420917229102808239800004365998 as follows:
    <?format-barcode:420917229102808239800004365998;'code128c'?>
    I then run the report and a PDF is generated with the barcode. Everything looks great so far.
    But when I scan the barcode, this is the value I am reading (tried it with several different scanner types):
    420917229102808300000000000000
    So:
         Value I was expecting:     420917229102808239800004365998
         Value I actually got:         420917229102808300000000000000
    It seems as if the number is getting rounded at the 16th digit (or so, it varies depending of the value I use).
    I have tried several examples and all seem to do the same. But anything with 15 digits or less seems to work perfectly.
    Any ideas?
    Manny
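    The cutoff Manny observes at around the 16th digit is consistent with the barcode value being passed through an IEEE-754 double somewhere in the pipeline, since a double preserves only about 15-17 significant decimal digits. A quick sketch of the precision loss (Python here purely for illustration; the template itself is RTF/XSL-FO):

    ```python
    # Forcing the 30-digit barcode value through a 64-bit float loses
    # everything past roughly the 16th significant digit, just as the
    # scanned value in the post shows.
    n = 420917229102808239800004365998
    back = int(float(n))        # round-trip through a double

    assert back != n                        # precision was lost
    assert str(back)[:15] == str(n)[:15]    # the leading ~15 digits survive
    ```

    If that is the cause, keeping the value as a string end-to-end (rather than letting any step treat it as numeric) would avoid the rounding.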

    Yes, I have.
    But I have found the cause now.
    When working with parameters coming in from the concurrent manager, all the parameters defined in the concurrent program in EBS need to be in the same case (upper or lower) as they have been defined in the data template.
    Once I changed all to be the same case, it worked.
    thanks for the effort.
    regards
    Ronny

  • SSO with large number of users

    Hi,
    We want to implement SSO using user mapping because we have different user ids from system to system.
    We have a large number of users in our system; how can we implement user mapping?
    Is there any way to write a program to take care of the user mapping? If yes, can you please give an overview so that I can dig into it?
    Thanks,
    Damodhar.

    Hi Damodhar
    User mapping can be done in the programming level. The User Management Engine in EP 6.0 provides two interfaces to access the user mapping data namely
    1. IUserMappingService.
    2. IUserMappingData.
    You can implement these two interfaces to enable user mapping. Please refer to the following link for further details.
    http://help.sap.com/saphelp_nw04/helpdata/en/69/3482ee0d70492fa63ffe519f5758f5/content.htm
    Hope that was helpful.
    Best Regards
    Priya

  • Looking for BPC user contacts with large numbers of dimension members

    I would like to make contact with other BPC users that have multiple dimensions with dimension members in excess of 10K.
    Thanks,
    Cary Schulz
    Newfield Exploration
    281-674-2004


  • Slow record selection in tableView component with large number of records

    Hi experts,
    we have a Business Server Page (flow logic) with several htmlb:inputField elements. As known from the SAP standard, we would like to offer a value help (F4) to the users for ease of record selection.
    We use the onValueHelp() method of the inputField to open an extra browser window through JavaScript. The popup calls another HTML page containing a tableView component with all available records. We use the SINGLESELECT mode for the table view.
    Everything works perfectly and efficiently unless the tableView contains too many entries. If the number of possible entries is large, the whole component performs very slowly; for example, selecting a record can take more than one minute. Navigating between pages through the buttons at the bottom of the component also takes a lot of time. It seems that the tableView component cannot handle so many entries.
    We tried switching between stateful and stateless mode, without success. Is there a way to perform the tableView selection without doing a server round trip? Any ideas and comments will be appreciated.
    Best regards,
    Sebastian

    Hi Raja,
    thank you for your hint. I took a look at sbspext_table/TableViewClient.bsp but did not really understand how the JavaScript coding works. Where is the JavaScript code in that example? Which file contains it?
    Meanwhile I implemented another way to avoid the server round trip.
    - Switch page mode of the popup window to "Stateful"
    - Use OnInitialization method like OnCreate (as shown in [using OnInitialization like OnCreate])
    - Limit the results of the SELECT statement with UP TO 1000 ROWS
    Best regards,
    Sebastian

  • Large number of dimension members

    Hi All,
    I have the following config :-
    Facts :
    Volume and Amount
    Dimensions :
    Account : 3 million rows
    Customer : 3.5 million rows - 3 levels
    Facility : 65 000 rows
    Product : 558 rows - 7 levels
    Business Unit : 2454 rows - 9 levels
    Metric : 101 rows - 5 levels
    Broker : 9521 rows
    Time : 60 rows
    GL Product : 3277 rows - 14 levels
    My big obvious concern is creating a cube dimensioned by customer and account since these are pretty large.
    Q1. What is the recommendation with these large dimensions
    Q2. Is there a drill-through or hybrid OLAP option that I could use to stop the cube exploding with so many members?
    Any comments/assistance appreciated.
    Regards,
    Brandon

    Hi,
    How many fact rows are you planning to load into your cube? This is more of a concern than the number of values in each dimension, although the potential size of the cube you are describing does suggest to me that you might be better off loading data into OLAP at a more aggregate level.
    What are you using for your query tool? Delivery of a drill-through or hybrid solution will depend on this.
    Stuart

  • Slow Performance with large library (PC)

    I've been reading many posts about slow performance but didn't see anything addressing this issue:
    I have some 40,000 photos in my catalog, and despite generating previews for a group of directories, LR is still very slow when scrolling through the pics in these directories.
    When I take 2,000 of these pics and import them into a new catalog, again generating previews, scrolling through the pics is much, much faster.
    So is there some upper limit of recommended catalog size for acceptable performance?
    Do I need to split my pics up by year? Seems counter productive, but the only way to see the pics at an acceptable speed.

    I also have serious performance issues, and I don't even have a large catalog: only around 2,000 pictures, and the db file itself is only 75 MB. I've run optimization; it didn't help. What I encountered is that the CPU usage of LR 1.1 goes up and STAYS up around 85% for 4-5 minutes after program start. During that time, zooming in to an image can take 2-3 minutes! After 4-5 minutes, CPU usage drops to 0%, the background task (whatever LR does during that time!) has finished, and I can work very smoothly. Preview generation cannot be the problem, since it also happens when I work in a folder that already has all previews built, close LR, and start again instantly: LR loads and AGAIN I have to wait 4-5 minutes until CPU usage has dropped so I can continue working with my images smoothly.
    This is very annoying! I will stop using LR and go back to Bridge/ACR/PS; that is MUCH, much faster. BUMMER!

  • Slow Performance with large OR query

    Hi All;
    I am new to this forum... so please tread lightly on me if I am asking some rather basic questions. This question has been addressed before in this forum more than a year ago (http://swforum.sun.com/jive/thread.jsp?forum=13&thread=9041). I am going to ask it again. We have a situation where we have large filters using the OR operator. The searches look like:
    & (objectclass=something) (|(attribute=this) (attribute=that) (attribute=something) .... )
    We are finding that the performance difference between 100 values and 1 value in a filter is significant. In order to increase performance, we have to issue the following filters in separate searches:
    & (objectclass=something) (attribute=this)
    & (objectclass=something) (attribute=that)
    & (objectclass=something) (attribute=something)
    The first search takes an average of 60 seconds, and the combination of searches in the second filter takes an average of 4 seconds. This is a large performance improvement.
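    The splitting workaround can be sketched as a small helper that turns one large OR filter into per-value filters (a purely illustrative sketch; the names are not from any particular LDAP SDK):

    ```python
    def split_or_filter(objectclass, attribute, values):
        """Turn one (&(objectclass=X)(|(a=v1)(a=v2)...)) filter into a list
        of (&(objectclass=X)(a=v)) filters, one per value, issued as
        separate searches as described above."""
        return [f"(&(objectclass={objectclass})({attribute}={v}))" for v in values]

    filters = split_or_filter("something", "attribute", ["this", "that"])
    print(filters[0])   # (&(objectclass=something)(attribute=this))
    ```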
    We feel that this solution is not desirable because:
    1. When the server is under heavy load, this solution will not scale very well.
    2. We feel we should not have to modify our code to deal with a server deficiency
    3. This solution creates too much network traffic
    My questions:
    1. Is there a query optimizer in the server? If so, shouldn't the query optimizer take care of this?
    2. Why is there such a large performance difference between the two filters above?
    3. Is there a setting somewhere in the server (documented or undocumented) that would handle this issue? (ie average query size)
    4. Is this a known issue?
    5. Besides breaking up the filter into pieces, is there a better way to approach this type of problem?
    Thanks in advance,
    Paul Rowe

    I also have serious performance issues, and i don´t even have a large database catalog, only around 2.000 pictures, the db file itself is only 75 mb big. Done optimization - didn´t help. What i encountered is that the cpu usage of LR 1.1 goes up and STAYS up around 85% for 4-5 minutes after programm start - during that time, zooming in to an image can take 2-3 minutes! After 4-5 minutes, cpu usage drops to 0%, the background task (whatever LR does during that time!) has finished and i can work very smoothly. preview generation cannot be the problem, since it also happens when i´m working in a folder that already has all previews build, close LR, and start again instantly. LR loads and AGAIN i´ll have to wait 4-5 minutes untill cpu ussage has dropped so i can continue working with my images smoothly.
    This is very annoying! I will stop using LR and go back to bridge/acr/ps, this is MUCH much faster. BUMMER!

  • File Bundle with large number of files failed

    Hi!
    Well, I thought problems would appear. We have some apps distributed just by copying a large number of files (not large in size) to Windows (XP Pro, usually) machines. These are programs that run from a directory without any special installation. A happy situation for the admin, from one side. In ZfD 4.0.1 we installed such an app on one of the machines, took a snapshot via the special app (who remembers), copied the files to a (NetWare) server share, gave rights to the device (~ workstation), associated it with the WS via eDir and ... voila, on the next restart or whatsoever the app was there. Very nice, indeed, I miss this!
    So, I tried to make this happen on ZCM 10 (on SLES 11). Created the app, sorry, bundle, uploaded the files (the first time it got stuck, the second time it completed, around 7,500 files) and did the distribution/launch association to the WS (~device). And ... got errors. Several entries from the log as examples below.
    Any ideas?
    More thanks, Alar.
    Error: [1/8/10 2:41:53 PM] BundleManager BUNDLE.UnknownExceptionOccurred An Unknown exception occurred trying to process task: Novell.Zenworks.AppModule.LaunchException: Exception of type 'Novell.Zenworks.AppModule.LaunchException' was thrown.
    at Novell.Zenworks.AppModule.AppActionItem.ProcessAct ion(APP_ACTION launchType, ActionContext context, ActionSetResult previousResults)
    Error: [1/8/10 2:41:54 PM] BundleManager ActionMan.FailureProcessingActionException Failed to process action: Information for id 51846d2388c028d8c471f1199b965859 has not been cached. Did you forget to call CacheContentInfo first?

    ZCM 10 is not efficient at handling that number of files in a single bundle when they are in the content repo.
    Suggestions include zipping the files, uploading the zip to the content repo, and then downloading and extracting it as part of the bundle.
    Or use the "Copy Directory" option to copy the files from a network source directly, like you did in ZDM.
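    The zip suggestion amounts to a pre-upload packing step like the following (a hypothetical sketch in Python; paths and names are illustrative):

    ```python
    import os
    import zipfile

    def pack_app(src_dir, zip_path):
        """Pack a directory of loose application files into one archive,
        so the bundle uploads and distributes a single content object
        instead of ~7,500 individual files."""
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for root, _dirs, files in os.walk(src_dir):
                for name in files:
                    full = os.path.join(root, name)
                    # store paths relative to the app root so extraction
                    # recreates the original directory layout
                    zf.write(full, os.path.relpath(full, src_dir))
    ```

    On the device side, the bundle would then download the single archive and extract it into the target directory as one of its actions.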
