EPM Performance Questions

Hi Experts
We have some concerns over performance in our BPC 10 environment (BW SP8, EPM add-in SP13). I wanted to post here to hopefully get a better understanding of the areas where we are currently uncertain, and of measures we could look at in order to improve.
1. Every time we transport one of our BPC input schedules / reports, we need to 'save it down' in the environment it has been transported to. This procedure involves opening the report, refreshing the data in it and then saving it. If we don't do this, the report will still load, but in 10+ minutes rather than the 3 minutes we expect. Any ideas why this is?
2. We see improved performance when someone has already run a particular report with identical selections. I expect this is down to OLAP caching on the BW server. Has anyone worked with this cache in order to improve performance? Is the cache invalidated after every write-back to the cube / model?
3. Our single model is getting rather large (50M records) and I'm looking into options for archiving / improving the setup. Is 50M a cause for concern? I saw a message on the boards recently from someone with billions of records, so perhaps not...
4. Finally, I notice our reports / input schedules hang in the same place every time they are run. Can anyone advise what can be checked? We are not using any member formulas.
Thanks in advance for any help / guidance you can offer.
Ian

Yeah, everyone thinks that at some point. It relies, however, on you being perfect and knowing everything in advance, which you aren't, and don't (no offence, the same applies to all of us!). Time and time again, people optimize up-front and discover that what they thought would be a bottleneck isn't. Plus, the JVM is much smarter at optimizing code than you think: trust it. The best way to get good performance out of your code is to write simple, straightforward, good OO code. JVMs are at a point now where they can optimize Java to outperform equivalent C/C++ code (no, really), but since they're written by human beings, who have real deadlines and targets, the optimizations that make it into a release are the most common ones. Just write your application, and then see how it performs. Trust me on this.
Have a read of http://java.sun.com/developer/technicalArticles/Interviews/goetz_qa.html for more info, and a chance to see where I plagiarized that post from :-)
Thanks for that link you gave me :)
It was useful to read.
As for the time and money of programming, that's not really an issue for me at the moment, since I'm doing this project for a company through school (it's like working, but not for money). Of course it shouldn't take forever, but I have time to figure a lot of things out.
For my next project I will try to focus more on building first and optimizing performance later (where that can be done with a good margin, since it seems the biggest bottlenecks are not in the code but outside it).
@promethuuzz
The idea was to put collection objects (objects that wrap the initialized ORM objects) in the request and pass them along to the JSP (this is all done through a customized MVC model).
So I wanted to see if this approach is performance-heavy, so I won't end up writing the entire app and finding out half of it performs badly :)
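For what it's worth, here is a minimal sketch of the hand-off being described, with hypothetical names (ProductController, ProductDao, Product and products.jsp are all made up for illustration): the controller stores the collection as a request attribute and forwards to the JSP. Only a reference is stored, nothing is copied or serialized, so this part of the pattern costs next to nothing; whatever the ORM does to load the objects is where the time actually goes.

import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical domain class standing in for an ORM-loaded object.
class Product {
    private final String name;
    Product(String name) { this.name = name; }
    public String getName() { return name; }
}

// Hypothetical DAO standing in for the real ORM layer.
class ProductDao {
    List<Product> findAll() { return Arrays.asList(new Product("example")); }
}

public class ProductController extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        List<Product> products = new ProductDao().findAll();
        // Stores only a reference in request scope; no data is copied.
        req.setAttribute("products", products);
        // Hand the same request/response pair off to the view.
        req.getRequestDispatcher("/WEB-INF/products.jsp").forward(req, resp);
    }
}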

Similar Messages

  • Simple performance question

    Simple performance question, put the simplest way possible: assume I have an int[][][][] matrix and a boolean add. The array is several dimensions deep.
    When add is true, I must add a constant value to each element in the array.
    When add is false, I must subtract the same constant from each element.
    Assume this is very hot code, i.e. it is called very often. How expensive is the condition checking? I present the two scenarios:
    private void process() {
        for (int i = 0; i < dimension1; i++)
            for (int ii = 0; ii < dimension1; ii++)
                for (int iii = 0; iii < dimension1; iii++)
                    for (int iiii = 0; iiii < dimension1; iiii++)
                        if (add)
                            matrix[i][ii][iii][iiii] += constant;
                        else
                            matrix[i][ii][iii][iiii] -= constant;
    }
    private void process() {
        if (add)
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][iiii] += constant;
        else
            for (int i = 0; i < dimension1; i++)
                for (int ii = 0; ii < dimension1; ii++)
                    for (int iii = 0; iii < dimension1; iii++)
                        for (int iiii = 0; iiii < dimension1; iiii++)
                            matrix[i][ii][iii][iiii] -= constant;
    }
    Is the second scenario worth a significant performance boost? Without understanding how the compiler generates executable code, it seems that in the first case n^d conditions are checked, whereas in the second only one is. It is, however, less elegant, but I am willing to use it for a significant improvement.

    erjoalgo wrote:
    I guess my real question is, will the compiler optimize the condition check out when it realizes the boolean value will not change through these iterations, and if it does not, is it worth doing that micro-optimization?
    Almost certainly not; the main reason being that
    matrix[i][ii][iii][iiii] +/-= constant
    is liable to take many times longer than the condition check, and you can't avoid it. That said, Mel's suggestion is probably the best.
    but I will follow amickr's advice and not worry about it.
    Good idea. Saves you getting flamed with all the quotes about premature optimization.
    Winston
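    For anyone who wants to check this empirically, a naive timing sketch (System.nanoTime rather than a proper harness such as JMH; the class name, dimension size and repetition counts are made up for illustration):

    public class BranchHoistTest {
        static final int N = 40;                 // hypothetical dimension size
        static final int CONSTANT = 3;
        static int[][][][] matrix = new int[N][N][N][N];

        // Variant 1: condition checked on every element.
        static void branchInside(boolean add) {
            for (int i = 0; i < N; i++)
                for (int ii = 0; ii < N; ii++)
                    for (int iii = 0; iii < N; iii++)
                        for (int iiii = 0; iiii < N; iiii++)
                            if (add)
                                matrix[i][ii][iii][iiii] += CONSTANT;
                            else
                                matrix[i][ii][iii][iiii] -= CONSTANT;
        }

        // Variant 2: condition hoisted out of the loop nest.
        static void branchHoisted(boolean add) {
            if (add)
                for (int i = 0; i < N; i++)
                    for (int ii = 0; ii < N; ii++)
                        for (int iii = 0; iii < N; iii++)
                            for (int iiii = 0; iiii < N; iiii++)
                                matrix[i][ii][iii][iiii] += CONSTANT;
            else
                for (int i = 0; i < N; i++)
                    for (int ii = 0; ii < N; ii++)
                        for (int iii = 0; iii < N; iii++)
                            for (int iiii = 0; iiii < N; iiii++)
                                matrix[i][ii][iii][iiii] -= CONSTANT;
        }

        public static void main(String[] args) {
            // Warm-up so the JIT has compiled both methods before timing.
            for (int w = 0; w < 100; w++) { branchInside(true); branchHoisted(false); }
            long t0 = System.nanoTime();
            for (int r = 0; r < 100; r++) branchInside(r % 2 == 0);
            long t1 = System.nanoTime();
            for (int r = 0; r < 100; r++) branchHoisted(r % 2 == 0);
            long t2 = System.nanoTime();
            System.out.printf("branch inside:  %d ms%n", (t1 - t0) / 1_000_000);
            System.out.printf("branch hoisted: %d ms%n", (t2 - t1) / 1_000_000);
        }
    }

    On a modern JVM the two variants typically come out very close, which backs up the point above: the array updates dominate, and the JIT can usually hoist or predict a loop-invariant branch on its own.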

  • BPM performance question

    Guys,
    I do understand that ccBPM is very resource-hungry, but what I was wondering is this:
    Once you use BPM, does an extra step decrease the performance significantly? Or does it just need slightly more resources?
    More specifically, we have quite complex mappings in 2 BPM steps. Combining them would make the mapping less clear, but would it be worth doing from a performance point of view?
    Your opinion is appreciated.
    Thanks a lot,
    Viktor Varga

    Hi,
    In SXMB_ADM you can set the timeout higher for sync processing.
    Go to Integration Processing in SXMB_ADM and set the parameter SA_COMM CHECK_FOR_ASYNC_RESPONSE_TIMEOUT to 120 (seconds). You can also increase the number of parallel processes if you have more waiting now: SA_COMM CHECK_FOR_MAX_SYNC_CALLS from 20 to XX. It all depends on your hardware, but this helped me go from the standard 60 seconds to maybe 70 in some cases.
    Make sure that your calling system does not have a timeout below the one you set in XI; otherwise yours will go on and finish, and your partner may end up sending the message twice.
    When you go for BPM, the whole workflow has to come into action. For example, where your mapping takes < 1 sec without BPM, doing it in a BPM means the transformation step can take 2 seconds plus one second for the mapping (that's just an example). So the workflow gives you many design possibilities (bridge, error handling), but it can slow down the process, and if you have thousands of messages the performance can be much worse than doing the same without BPM.
    See the links below:
    http://help.sap.com/bp_bpmv130/Documentation/Operation/TuningGuide.pdf
    http://help.sap.com/saphelp_nw04/helpdata/en/43/d92e428819da2ce10000000a1550b0/content.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/xi/3.0/sap%20exchange%20infrastructure%20tuning%20guide%20xi%203.0.pdf
    BPM Performance tuning
    BPM Performance issue
    BPM performance question
    BPM performance- data aggregation persistance
    Regards
    Chilla..

  • Swing performance question: CPU-bound

    Hi,
    I've posted a Swing performance question to the java.net performance forum. Since it is a Swing performance question, I thought readers of this forum might also be interested.
    Swing CPU-bound in sun.awt.windows.WToolkit.eventLoop
    http://forums.java.net/jive/thread.jspa?threadID=1636&tstart=0
    Thanks,
    Curt

    You obviously don't understand the results, and the first reply to your posting on java.net clearly explains what you missed.
    The event queue is using Thread.wait to sleep until it gets some more events to dispatch. You have incorrectly diagnosed the sleep waiting as your performance bottleneck.
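    To illustrate that point, a minimal standalone sketch (unrelated to the original poster's code; the class name is made up): a thread parked in Object.wait() sits in the WAITING state and burns no CPU until it is notified, which is exactly what the event dispatch thread does between events.

    public class WaitIsCheap {
        private static final Object lock = new Object();

        public static void main(String[] args) throws InterruptedException {
            Thread waiter = new Thread(() -> {
                synchronized (lock) {
                    try {
                        lock.wait();   // parks the thread; no CPU is burned here
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            waiter.start();
            Thread.sleep(100);                       // let it reach wait()
            System.out.println(waiter.getState());   // typically prints WAITING
            synchronized (lock) { lock.notify(); }   // wake it so the JVM can exit
            waiter.join();
        }
    }

    Run it and watch the process in a CPU monitor while the thread is waiting; usage stays at essentially zero.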

  • Xcontrol: performance question (again)

    Hello,
    I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans per second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I only have 0 to 1% CPU load, too.
    Is there a way to reduce the CPU load when using XControls?
    If there isn't, and if this is not a problem with my installation but a known issue, I think this would be a potential point for NI to fix in a future update of LV.
    Regards,
    soranito
    Attachments:
    XControl_performance_test.zip ‏60 KB

    soranito wrote:
    Hello,
    I've got a little performance question regarding XControls. I observed rather high CPU load when using XControls. To investigate further, I built a minimal XControl (boolean type) which only writes the received boolean value to a display element in its facade (see attached example). When I use this XControl in a test VI and write to it at a rate of 1000 booleans per second, I get a CPU load of about 10%. When I write directly to a boolean display element instead of the XControl, I have a load of 0 to 1%. The funny thing is, when I emulate the XControl functionality with a subVI, a subpanel and a queue (see example), I only have 0 to 1% CPU load, too.
    Okay, I think I understand the question now. You want to know why an equivalent XControl boolean consumes 10x more CPU resource than the LV base package boolean?
    Okay, try opening the project from my reply yesterday. I don't have access to LV at my desk, so let's try this: open up your XControl facade.vi. Notice how I separated your data event into two events? Go to the data change VI event; when looping back the action, set isDataChanged (part of the data change cluster) to FALSE. For the data input (the one displayed on your facade.vi front panel), set isDataChanged to TRUE. This will limit the number of times the facade loops. It will not drop your CPU from 10% to 0%, but it should drop a little, enough to give you a short-term solution. If that doesn't work, just play around with the loopback statement; I can't remember the exact method.
    Yeah, I agree an XControl shouldn't over-consume system resources. I think XControl is still in a primitive form, and I'm not sure whether NI plans to invest more time in bug fixes or enhancements. IMO, XControl isn't quite ready for primetime yet; there are just too many issues that need improvement.

  • MBP with 27" Display performance question

    I'm looking for advice on improving the performance, if possible, of my MacBook Pro and new 27" Apple display combination. I'm using a 13" MacBook Pro 2.53GHz with 4GB RAM and an NVIDIA GeForce 9400M graphics card, and I have 114GB of the 250GB of HD space available. What I'm really wondering is whether this is enough spec to run the 27" display easily. Apple says it is... and it does work, but I suspect that I'm working at the limit of what my MBP is capable of. My main applications are Photoshop CS5 with Camera Raw and Bridge. Everything works, but I sometimes get lock-ups and things are basically a bit jerky. Is the bottleneck my 2.53GHz processor or the graphics card? I have experimented with the OpenGL settings in Photoshop and tried closing all unused applications. Does anyone have suggestions for tuning things, and is there a feasible upgrade for the graphics card if that would make a difference? I have recently started working with 21MB RAW files, which I realise isn't helping. Any thoughts would be appreciated.
    Matt.

    I just added a gorgeous 24" LCD to my MBP setup (the G5 is not happy). The answer to your question is yes. Just go into Display Preferences and drag the menu bar over to the 24"; this will make the 24" the primary display and the MBP the secondary when connected.

  • Performance question about 11.1.2 forms at runtime

    hi all,
    Currently we are investigating a Forms/Reports migration from 10 to 11.
    Initially we were using v11.1.1.4 as the baseline for the migration; now we are looking at 11.1.2.
    We have the impression that performance has decreased significantly between these two releases.
    To give an example:
    A wizard screen contains an image alongside a number of items for entering details. In 11.1.1.4 this screen shows up immediately. In 11.1.2 you see the image rolling out on the canvas while the properties of the items seem to be set during this event.
    I saw that a number of features were added to be able to tune performance, which ... need processing too.
    I get the impression that a large number of events are communicated over the network during the 'build' of the client-side view of the screen. If I recall correctly, during the migration from 6 to 9 events were bundled for transmission over the network so that delays couldn't come from network round trips. I have the impression that this has been reversed, and things are now communicated between client and server as they arrive rather than being bundled.
    My questions are:
    - Is anyone out there experiencing the same kind of behaviour?
    - If so, are there properties to control the behaviour and improve performance?
    - Are there performance-monitoring properties that are set by default but cause slowness as a side effect, and could perhaps be unset?
    Your feedback will be dearly appreciated.
    Greetings,
    Jan.

    The profile can't be changed, although I suspect that if there were an issue, banding the line would be something they could utilise if you were happy to do so.
    It's all theoretical right now until you get the service installed. Don't forget there are over 600,000 customers now on FTTC and only a very small percentage of them have faults. It might seem like a lot looking at this forum, but that's only because forums are where people tend to come to complain.

  • Controlfile on ASM performance question

    We are seeing Controlfile Enqueue performance spikes. The options under consideration are to move the control file to a separate diskgroup (needs an outage), or to add some disks (from different LUNs; I prefer this approach) to the same disk group. It seems like slow disk is causing this issue.
    Second question: can the snapshot controlfile be placed on ASM storage?

    Following points may help:
    - Separating the control file into another diskgroup may make things even worse if the total number of disks in the new disk group is insufficient.
    - Control file contention issues usually have nothing to do with the storage throughput you have, but rather with the number of operations requiring different levels of exclusion on the control files.
    - Since multiple copies of the controlfile are updated concurrently, an occasional problem is that the secondary copy of the controlfile is slower than the other. Please check that this is not the issue (different tiers of storage may cause such problems).
    Regards,
    Husnu Sensoy

  • Editing stills with motion effects, performance questions.

    I am editing a video in FCE that consists solely of still photos. I am creating motion effects (pans, pullbacks, etc.) and dissolve transitions, and overlaying titles. It will be played back on DVD on a 16:9 monitor (standard DVD, not Blu-ray hi-def). Some questions:
    What is the best FCE setup for image quality: DV-NTSC? DV-NTSC Anamorphic? Or HDV-1080i or 720p30, even though it won't be played back as hi-def?
    How do I best avoid the squiggly-line problem with pan moves etc.?
    On my G5 (2GB RAM, single processor) I seem to be having performance problems with playback: slow to render, dropped frames, etc.
    Thanks for any help!

    Excellent summary MacDLS, thanks for the contribution.
    A lot of the photos I've taken on my camera are 3072 X 2304 (resolution 314) .jpegs.
    I've heard it said that jpegs aren't the best format for Motion, since they're a compressed format.
    If you're happy with the jpegs, Motion will be, too.
    My typical project could either be 1280 x 720 or SD. I like the photo to be a lot bigger than the canvas size, so I have room to do crops and grows, and the like. Is there a maximum dimension that I should be working with?
    Yes and no. Your originals are about 7,000,000 pixels. Your video working space only displays about 950,000 pixels at any single instant.
    At that project size, your stills are almost 700% larger than the frame. This will tax any system as you add more stills; 150% is more realistic in terms of processing overhead, and I try to only import HUGE images that I know are going to be tightly cropped by zooming in. You need to understand that a 1300x800 section of your original is as far as you can zoom in; at that point the pixels are at 100% size. If you zoom in further, all you get are bigger pixels. The trade-off you make is that if you zoom way out on your source image, you've thrown away 75% of its content to scale it to fit the video format; you lose much, much more if you go to SD.
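    For the record, the arithmetic behind those (rounded) figures: 3072 x 2304 = 7,077,888 pixels in the original, while a 1280 x 720 frame is 921,600 pixels; that ratio is roughly 7.7 to 1, which is where the "almost 700% larger" figure comes from.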
    Finally, the manual says that d.p.i. doesn't matter in Motion, so does this mean that it's worth actually exporting my 300 dpi photos to 72 dpi before working with them in Motion?
    Don't confuse DPI with resolution. Your video screen will only show about 900,000 pixels in HD and about 350,000 pixels in SD, regardless of how many pixels there are in your original.
    bogiesan

  • 9 shared objects performance question

    I have 9 shared objects, 8 of which contain dynamic data that I cannot really consolidate into a single shared object, because the shared object often has to be cleared. My question is: what performance issues will I experience with this number of shared objects? I may be wrong in thinking that 9 shared objects is a lot. Anybody with experience using multiple shared objects, please respond.

    I've used many more than 9 SO's in an application without issue. I suppose what it really comes down to is how many clients are connected to those SO's and how often each one is being updated.

  • Import: performance question

    Hi, what is the difference between these statements, in terms of application performance?
    import java.io.*;
    and
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    Which one is faster for execution?

    Neither. Imports are resolved at compile time; the compiled bytecode contains fully qualified references either way, so there is no difference at execution time. Search the forums or the web for the countless answers to this same question.

  • Functions slowing down performance question

    Hey there.
    I've got a query that really slogs. This query calls quite a few functions, and there's no question that some of the work that needs to be done simply takes time.
    However, someone has adamantly told me that using functions slows down the query compared to having the same code in the base SQL.
    I find it hard to believe that the exact same code, whether well written or not, would be much faster in the base view than in functions called by the view.
    Is it correct that functions kill performance?
    Thanks for any advice.
    Russ

    There is a performance impact from context switching between the SQL and PL/SQL engines. Pure SQL is always faster. For example:
    SQL> create or replace function f (n number) return number as
      2  begin
      3    return n + 1;
      4  end;
      5  /
    Function created.
    SQL> set timing on
    SQL> select sum(f(level)) from dual
      2  connect by level <= 1000000;
    SUM(F(LEVEL))
       5.0000E+11
    Elapsed: 00:00:07.06
    SQL> select sum(level + 1) from dual
      2  connect by level <= 1000000;
    SUM(LEVEL+1)
      5.0000E+11
    Elapsed: 00:00:01.09

  • PL/SQL performance questions

    Hi,
    I am responsible for a large, computation-intensive PL/SQL program that performs some batch processing on a large number of records.
    I am trying to improve the performance of this program and have a couple of questions that I am hoping this forum can answer.
    I am running Oracle 11.1.0.7 on Windows.
    1. How does compiling with DEBUG information affect performance?
    I found that my program units (packages, procedures, object types, etc.) run significantly slower if they are compiled with debug information.
    I am trying to understand why this is so. Does debug information instrument the code and result in more code that needs to be executed?
    Does it prevent compiler optimizations? Both?
    The reason I ask is to understand whether it is valid to compare the performance of two different implementations if both are compiled with debug information. For example, if one approach is 20% faster when compiled with debug information, is it safe to assume that it will also be 20% faster in production (without debug information)? Or, as I suspect, does the presence of debug information change the performance profile of the code?
    2. What is the best way to measure how long a PL/SQL program takes?
    I want to compare two approaches, such as using a VARRAY vs. a TABLE variable. I have been doing this by creating two test procedures that perform the same task using the two approaches I want to evaluate.
    How should I measure the time an approach takes so that it is not affected by other activity on my system? I have tried using CPU time (dbms_utility.get_cpu_time) and elapsed time. CPU time seems to be much more consistent between runs; however, I am concerned that CPU time might not reflect all the time the process takes.
    (I am aware of the profiler and have used that as well, however, I am at the point where profiling is providing diminishing returns).
    3. I tried recompiling my entire system to be natively compiled, but to my great surprise did not notice any measurable difference in performance!
    I compiled all specifications and bodies in all schemas for native compilation. Can anyone explain why native compilation would not result in a significant performance improvement on a process that appears to be CPU-bound while running? Are there other settings or additional steps that need to be performed for native compilation to be effective?
    Thank you,
    Eric

    Yes, debug must add instrumentation; I think that is the point of it. Whether it lowers the compiler optimisation level I don't know (I haven't read anywhere that it does), but surely if you are stepping through code manually to debug it, you don't care.
    I don't know of a way to measure pure CPU time independently of other system activity. One common approach is to write a test program that repeats your sample code a large enough number of times for a pattern to emerge. To find how much time individual components contribute, dbms_profiler can be quite helpful (most conveniently via a button press in IDEs such as PL/SQL Developer, but it can also be invoked from the command line).
    It is strange that native compilation appears to make no difference. Are you sure everything is actually using it? e.g. is it shown as natively compiled in ALL_PLSQL_OBJECT_SETTINGS?
    I would not expect a PL/SQL VARRAY variable to perform any differently from a nested table one; I expect they have an identical internal implementation. The difference is that VARRAYs have much reduced functionality and a normally unhelpful limit setting.

  • Exporting to OMF for audio and After effect + delay in performance question

    Hello
    I have two questions.
    First, I have a problem with the performance of FCP 5. I'm using a 500 GB FireWire 800 external drive (not ...). Every time I hit the play button there is a delay of 2 seconds before my sequence plays. I'm not sure why this is happening; I have a very basic sequence with one layer.
    Second question: I'm trying to make FCP popular in my country, but it is very hard since we work with OMF in both Pro Tools and After Effects. In Avid we exported OMF with embedded media, audio included. I know there is an option to export OMF with audio, but Pro Tools has a hard time opening it. As for video there is no choice at all; what can we do?
    Thank you

    Anyone?

  • Named Searches - Performance Questions

    Dear MDM Pros,
    I have a question regarding the performance of named searches.
    I have a repository with 600,000 records (and various lookup tables), and I need to set up named searches to restrict access to the data.
    I have one field with classifications (8-digit numbers; 15,000 different classes) on which I want to restrict access. The restriction should work on the first 2 digits of a class.
    Example Classification:
    21010509
    21010503
    21010504
    21010507
    19050711
    19050912
    31020530
    Rule:
    LEFT(CLASSIFICATION,2) >= 19 AND LEFT(CLASSIFICATION,2) <= 21
    So my idea is to use this expression in the search and save it as a named search.
    As I wrote before, though, I think this is really slow.
    Can anybody give me a hint on how to find a performance-optimized solution to this problem?
    Best regards
    Roman

    Hi Christian,
    here is what SAP said:
    07.10.2008 - 16:22:51 CET - Reply from SAP
    Dear Mr Becker,
    I have looked at the issue that you described and have attached two notes: 1077701 and 1138862.
    Note 1077701 describes, under "6) Named Search", that the use of expressions is not supported for named searches. The reason for this is that an expression can result in an extensive search operation, as you are experiencing. The response time in MDM Data Manager was in the range of 2-3 minutes when I executed a free-form search with your expression in my environment. My observation in our SAPconnect session yesterday was that you see a similar response time with MDM Data Manager in your environment.
    Note 1138862 describes, under "Situation 2:", that the SRM-MDM Catalog Search UI will show a 5-7 times slower response time than the MDM Data Manager for the same search. This is why you experience response times of 10-15 minutes in the SRM-MDM Catalog Search UI when calling it with a named search that uses an expression.
    Kind regards,
    Alexander Ohlgart
    I am running MDM 5.5 SP3 with the latest fix.
    HTH
    Roman
