Metrics - Thresholds - Best Practices

All,
I installed EM Grid Control 11g and configured the targets and notification rules. Now I am trying to set up thresholds. Are there best-practice threshold values that someone can share for the various metrics at the WebLogic level? I know this is a generic question and I can tune the thresholds for my environment/application usage, but there must be a ball-park threshold document somewhere for alert purposes.
Thanks in advance,
Prasad.

Hi Prasad,
There is no document giving recommendations on threshold values; setting thresholds really depends on your environment. I recommend looking at performance history: pick, for instance, a timeframe when the application/server was under heavy load but performance was still good, then set thresholds according to that, adjusting up or down as needed.
Thanks,
Nicole
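
One way to put that advice into numbers, if you have access to the Grid Control repository, is to query the rolled-up metric history and use the peaks of a known-good period as a starting point. A minimal sketch against the standard MGMT$METRIC_DAILY repository view; the target and metric names are placeholders you would swap for your own:

select metric_column,
       round(avg(average), 2) as typical_value,
       round(max(maximum), 2) as observed_peak
from   mgmt$metric_daily
where  target_name      = 'your_weblogic_server'   -- placeholder target name
and    metric_name      = 'your_metric_group'      -- placeholder metric group
and    rollup_timestamp >= sysdate - 30            -- last 30 days of history
group  by metric_column
order  by metric_column;

A warning threshold set a comfortable margin above the observed peak of a period with good performance is usually a safer starting point than any generic number.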

Similar Messages

  • Trunk Port Threshold Best Practice?

I'm using CiscoWorks LMS, and I notice the notification threshold for switch port utilisation is set at 40%. I know I've seen this before, but I can't remember why 40% was the magic number. I've Googled and come up with nothing useful, so I'm handing it over to the experts :)
    Does this have something to do with this value being an "average" rather than a peak? I'm struggling to understand why, in a fully switched network, 40% utilisation is something to be concerned about.
    Hope you can improve my education :)
    Cheers,
    Ben.

    Thanks Mohammed.
    I think I may have chosen my words poorly.
    What I'm really trying to understand is this:
    In a full-duplex, microsegmented network, which is essentially a collision-less environment, wouldn't it make more sense to set a utilisation threshold of around 80%? In that case, you'd actually be getting close to saturating your bandwidth and creating a bottleneck.
    At 40% utilisation, especially on a trunk port which you'd expect to run at a higher utilisation, you still have quite a large portion of free bandwidth.
    I'm still relatively new to the networking game, so I'm trying to get my head around something that others seem to take for granted. The question is really more general, about the 40% utilisation threshold figure, than about CW LMS specifically.
    Cheers,
    Ben.

  • Oracle SLA Metrics and System Level Metrics Best Practices

    I hope this is the right forum...
    Hey everyone,
This is what I am looking for: we have several SLAs set up, we have defined many business metrics, and we are trying to map them to system-level metrics. One key area for us is Oracle. I was wondering if there is a best-practice guide out there for SLAs when dealing with Oracle or, even better, for system-level metrics in general.
Any help would be greatly appreciated.

    Hi
    Can you also include the following in the FAQ?
1) ODP.NET, if installed prior to this beta version - what is the best practice? De-install it prior to getting this installed, etc.?
2) As multiple Oracle homes have become the norm these days, and this is a client-only install, it should probably be non-intrusive and non-invasive. I hope that is being addressed.
3) Is this a precursor to future happenings, like some of the app server evolving to support .NET natively, and so on?
4) Where is BPEL in this scheme of things? Is that being added to this as well, so that Eclipse and .NET VS 2003 developers can use some common web-service framework?
    Regards
    Sundar
It was interesting to see the options offered for changing the spelling of "Webservice" [the first one was WEBSTER].

  • Best practice on monitoring Endeca health / defining outage

    (This is a double post from the Endeca Experience Management forum)
I am looking for best practice on how to define an Endeca service outage and monitor the health of the system. I understand this depends on your user requirements and may vary from customer to customer. Specifically, what criteria do you use to notify your engineers that there is a problem? We have our load balancers pinging the dgraphs on an interval; however, the ping operation is not sufficient in our use case. We are also experimenting with running a "low-cost" query against the dgraphs on an interval and using some query-latency thresholds to determine an outage. I want to hear from people in the field running large commercial web sites about your best practices for monitoring and notifying on the health of the system.
    Thanks.

The performance metrics should help you analyse the queries for fine-tuning.
Here are a few best practices:
    1. Reduce the number of components per page
    2. Avoid complex LQL queries
    3. Keep the LQL threshold small
    4. Display the minimum number of columns needed

  • Best Practice for ASA Route Monitoring Options?

We have a pair of Cisco ASA 5505s located in different locations, and there are two point-to-point links between those two locations: one primary link (static route with low metric) and one backup (static route with high metric). The tracked option is enabled for monitoring the state of the primary route. The detailed parameters of the tracking options are as follows:
Frequency: 30 seconds
Threshold: 3000 milliseconds
Timeout: 3000 milliseconds
Data size: 28 bytes
ToS: 0
Number of packets: 8
    ------ show run------
    sla monitor 1
    type echo protocol ipIcmpEcho 10.200.200.2 interface Intersite_Traffic
    num-packets 8
    timeout 3000
    threshold 3000
    frequency 30
    sla monitor schedule 1 life forever start-time now
    ------ show run------
I'm not sure whether these settings are so sensitive that the secondary static route takes over right away, even when only some small link flapping occurs.
What is the best practice for setting those parameters up in a production environment? How can we specify reasonable monitoring options to fit our needs?
    Thank you for any idea.

    Hello,
Of course, settings that are too sensitive might cause a failover when just a few packets get lost, but remember that the whole purpose of this is to give your network as little downtime as possible.
If you tune these parameters, what happens is simply that failover will be triggered on a different time basis.
This is taken from a Cisco document (if you tune the SLA process as shown, 3 packets will be sent every 10 seconds, so all 3 of them need to fail for the SLA to be declared down). This Cisco configuration example looks good, but there are network engineers who would rather use a tighter timeline than that. For the probe to actually fail the route over, you also need the schedule, track and route statements (the interface name and next-hop IP here are assumed; match them to your own topology):
sla monitor 123
type echo protocol ipIcmpEcho 10.0.0.1 interface outside
num-packets 3
frequency 10
sla monitor schedule 123 life forever start-time now
track 1 rtr 123 reachability
route outside 0.0.0.0 0.0.0.0 10.0.0.1 1 track 1
    Regards,
    Remember to rate all of the helpful posts ( If you need assistance knowing how to rate a post just let me know )

  • Typical metric thresholds and patterns for monitoring Exadata

    I’m looking for any best practices or a list of recommended settings for the following:
- Metric threshold settings to manage Exadata with OEM12c.
- A list of the main and/or typical metrics used for setting up alerts in OEM12c for Exadata.
    Thanks in advance,
    Carlos.

    Hello Ravi,
This is a 10.2.0.4 (4-node) RAC on Linux.
This is one alert text:
    Host=WEUSRV011.intrum.net
    Target type=Database Instance
    Target name=ie_colldesk_iecolld1
    Categories=Performance
    Message=Metrics "Global Cache Average Current Get Time" is at 0.615
    Severity=Warning
    Event reported time=Feb 25, 2013 9:44:05 PM CET
    Target Lifecycle Status=Production
    Comment=WEU Oracle Production Hardware
    Operating System=Linux
    Platform=x86_64
    Event Type=Metric Alert
    Event name=rac_global_cache:currentgets_cs
    Metric Group=Global Cache Statistics
    Metric=Global Cache Average Current Block Request Time (centi-seconds)
    Metric value=0.615384615384615
    Key Value=SYSTEM
    Rule Name=Locks_Rule,rule 96
    Rule Owner=A_GUTIERREZ
    Update Details:
    Metrics "Global Cache Average Current Get Time" is at 0.615
    And
    Host=tstcolldesk01.intrum.net
    Target type=Database Instance
    Target name=COLLDESK_COLLDESK1
    Categories=Performance
    Message=Metrics "Global Cache Average Current Get Time" is at 0.632
    Severity=Warning
    Event reported time=Feb 25, 2013 9:03:00 PM CET
    Comment=WEU Oracle test Environment
    Operating System=Linux
    Platform=x86_64
    Event Type=Metric Alert
    Event name=rac_global_cache:currentgets_cs
    Metric Group=Global Cache Statistics
    Metric=Global Cache Average Current Block Request Time (centi-seconds)
    Metric value=0.631578947368421
    Key Value=SYSTEM
    Rule Name=Locks_Rule,rule 96
    Rule Owner=A_GUTIERREZ
    Update Details:
    Metrics "Global Cache Average Current Get Time" is at 0.632
The metric definitions are:
    Global Cache Average Current Block Request Time (centi-seconds)
    Global Cache Average CR Block Request Time (centi-seconds)
    And the metrics values defined at template level are:
    Warning Threshold 1.2
    Critical Threshold 3
    Comparison Operator >
    Occurrences Before Alert 3
    Corrective Actions None
    I need to explore select * from dba_thresholds.
    Thanks
    Best regards
    Arturo
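
As a starting point for that dba_thresholds exploration, a minimal sketch (the metric-name filter is only an example, matching the Global Cache metrics above):

select metrics_name, warning_operator, warning_value,
       critical_operator, critical_value,
       consecutive_occurrences, object_name
from   dba_thresholds
where  metrics_name like 'Global Cache%';

Comparing this output with the template values (warning 1.2 / critical 3) should show whether the template was actually applied to both instances.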

  • Best practice for Video over IP using ISDN WAN

I am looking for the best practice to ensure that the WAN has sufficient active ISDN channels to support the video conference connection.
Relying on a load threshold either:
- takes too long to establish the ISDN calls, causing problems for the video setup,
- or places additional ISDN calls too quickly when only data is using the line.
What I need is for the ISDN calls to be pre-established just prior to the video call. I have done this in the past with the "ppp multilink links minimum" command, but this manual intervention isn't the preferred option in this case.
    thanks

This method is as secure as the password: an attacker can see the hashed value, and you must assume that they know what has been hashed, and with what algorithm. Therefore, the challenge in attacking this system is simply to hash lots of passwords until you get one that gives the same value. Rainbow tables may make this easier than you assume.
Why not use SSL to send the login request? That encrypts the entire conversation, making snooping pointless.
You should still MD5 the password so you don't have to store it unencrypted on the server, but that's a side issue.

  • Best Practice: Configuring Windows Azure Management Services

I have 3 websites, 1 blob storage account, and 1 SQL Server that I would like to configure for basic stability and performance monitoring. I know I can set up alerts through Management Services based on various metrics. My question is: can someone give me a recommended set of metrics that are good baselines?
It is nice that Azure is so customizable, but frankly I have no idea how much CPU time in milliseconds over a given evaluation window is appropriate. Or how many HTTP server errors? More than 0 seems bad, no? Wouldn't I want to know of any/all errors?
    So if anyone has some "best practice" metrics for me, that would be really helpful.
    Thanks.

    Hi,
>> can someone give me a recommended set of metrics that are good baselines?
Actually, many metrics depend on your scenario. For instance, if there are a lot of concurrent requests, or if a single request is expected to take some heavy computation, then high CPU usage is expected, so it is difficult to give you a specific number.
In general, you want the CPU usage of a web server to be as high as possible (idle CPU costs money but does not provide valuable results), yet low enough that additional concurrent requests can be served without too much delay. In Windows Azure, you may want to set up auto-scaling so that if CPU usage is high enough during a period, you create a new instance, and if CPU usage is low enough during a period, you remove an instance. You may also want to use response time in addition to CPU to decide whether to add/remove an instance.
>> Or how many Http Server Errors? More than 0 seems bad, no? Wouldn't I want to know of any/all errors?
As for server errors, in general you want to get notified of all errors (> 0), since they're unexpected and need to be investigated. But if in your scenario you expect a certain level of server errors, then it is fine to use a larger number.
    Best Regards,
    Ming Xu

  • Best practices for gathering statistics in 10g

    I would like to get some opinions on what is considered best practice for gathering statistics in 10g. I know that 10g has auto statistics gathering, but that doesn't seem to be very effective as I see some table stats are way out of date.
    I have recommended that we have at least a weekly job that generates stats for our schema using DBMS_STATS (DBMS_STATS.gather_schema_stats). Is this the right approach to generate object stats for a schema and keep it up to date? Are index stats included in that using CASCADE?
    Is it also necessary to gather system stats? I welcome any thoughts anyone might have. Thanks.

    Hi,
> Is this the right approach to generate object stats for a schema and keep it up to date?
The choices of execution plans made by the CBO are only as good as the statistics available to it. The old-fashioned analyze table and dbms_utility methods for generating CBO statistics are obsolete and somewhat dangerous to SQL performance. As we know, the CBO uses object statistics to choose the best execution plan for all SQL statements.
I spoke with Andrew Holdsworth of Oracle Corp's SQL tuning group, and he says that Oracle recommends taking a single, deep sample and keeping it, only re-analyzing when there is a change that would make a difference in execution plans (not the default 20% re-analyze threshold).
    I have my detailed notes here:
    http://www.dba-oracle.com/art_otn_cbo.htm
    As to system stats, oh yes!
By measuring the relative costs of sequential vs. scattered I/O, the CBO can make better decisions. Here are the data items collected by dbms_stats.gather_system_stats:
No Workload (NW) stats:
CPUSPEEDNW - CPU speed
IOSEEKTIM - the I/O seek time in milliseconds
IOTFRSPEED - the I/O transfer speed in bytes per millisecond
    I have my notes here:
    http://www.dba-oracle.com/t_dbms_stats_gather_system_stats.htm
    Hope this helps. . . .
    Don Burleson
    Oracle Press author
    Author of “Oracle Tuning: The Definitive Reference”
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm
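
To make the weekly-job idea from the question concrete, here is a minimal sketch of the calls discussed above. The schema name is a placeholder, and cascade => TRUE is what pulls the index statistics in (answering the CASCADE question):

begin
  -- object statistics for one schema; CASCADE includes the indexes
  dbms_stats.gather_schema_stats(
    ownname          => 'YOUR_SCHEMA',                 -- placeholder
    estimate_percent => dbms_stats.auto_sample_size,
    cascade          => true,
    options          => 'GATHER STALE');               -- only refresh stale stats
  -- no-workload system statistics (CPUSPEEDNW, IOSEEKTIM, IOTFRSPEED)
  dbms_stats.gather_system_stats(gathering_mode => 'NOWORKLOAD');
end;
/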

  • Bandwidth Utilization Avg or Max for capacity Planning best practice

Hello all - this is a conceptual, non-Cisco-product question. I hope you can help me find the industry best practice.
I am doing capacity planning for WAN link bandwidth. Studying last month's bandwidth utilization in the MRTG graph, I see two values:
Average
Maximum
To measure how much bandwidth my remote location is using, which value should I use: average or maximum?
The average is always low, e.g. 20% to 30%.
The maximum is a continuous 100% for 3 hours, in 3 different intervals in a day, and drops to around 60% for the rest of the day.
What is the best practice followed in the networking industry to derive the upgrade size of the bandwidth from the utilization graph?
    regards,
    SAIRAM

    Hello.
It makes no sense to use the average over a whole day (or a month), as you do capacity management to avoid business impact due to link utilization, and the average does not help you catch whether end users experience any performance issues.
Typically your capacity-management algorithm/thresholds depend on traffic patterns. These are really different cases if you run SAP+VoIP vs. YouTube+Outlook. If you have any business-critical traffic, you need to deploy QoS (unless you are allowed to increase link bandwidth infinitely).
So, I would recommend using the 95th percentile of the maximum values on a 5-15 minute interval (your algorithm/thresholds will be really sensitive to the polling interval, so choose it carefully). After collecting a baseline (for a month or so), go and ask users about their experience and try to correlate poor experience with traffic bursts. This will help you define thresholds for link-upgrade triggers.
PS: proactive capacity management includes link planning for new sites and their impact on existing links (in HQ and other spokes).
PS2: I would also recommend separately tracking utilization during business hours (business traffic) and non-business hours (service or backup traffic).
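
As an illustration of that 95th-percentile calculation: if the 5-minute maximum samples were loaded into a table, it becomes a one-liner in SQL. The table and column names here are hypothetical:

select percentile_cont(0.95) within group (order by max_util_pct) as pct95_util
from   link_samples                    -- hypothetical table of 5-minute maximums
where  sample_time >= sysdate - 30;    -- last month's baseline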

  • Oracle Statistics - Best Practice?

    We run stats with brconnect weekly:
    brconnect -u / -c -f stats -t all
I'm trying to understand how some of our stats can be old or stale. Where's my gap? We are running Oracle 11g and have table monitoring set on every table. My user_tab_modifications is tracking changes in just over 3,000 tables. I believe that when those entries surpass 50% changed, they will be flagged for the above brconnect run to update their stats. Correct?
    Plus, we have our DBSTATC entries.  A lot of those entries were last analyzed some 10 years ago.  Does the above brconnect consider DBSTATC at all?  Or do we need to regularly run the following, as well?
    brconnect -u / -c -f stats -t dbstatc_tab
    I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    SQL> select count(*) from dba_tab_statistics
      2  where owner = 'SAPR3' and stale_stats = 'YES';
      COUNT(*)
          1681
    I realize that stats last analyzed some ten years ago does not necessarily mean they are no longer good but I am curious if the weekly stats collection we are doing is sufficient.  Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?

    Hi Richard,
    > We are running Oracle 11g and have Table Monitoring set on every table.
The table monitoring attribute is not necessary anymore; better said, it is deprecated, because this mechanism is controlled by STATISTICS_LEVEL nowadays. The table monitoring attribute is only relevant for Oracle versions lower than 10g.
    > I believe that when those entries surpass 50% changed, then they will be flagged for the above brconnect to update their stats.  Correct?
    Correct, if BR*Tools parameter stats_change_threshold is set to its default. Brconnect reads the modifications (number of inserts, deletes and updates) from DBA_TAB_MODIFICATIONS and compares the sum of these changes to the total number of rows. It gathers statistics, if the amount of changes is larger than stats_change_threshold.
    > Does the above brconnect consider DBSTATC at all?
    Yes, it does.
    > I've got tables that are flagged as stale, so something doesn't seem to be quite right in our best practice.
    The column STALE_STATS in view DBA_TAB_STATISTICS is calculated differently. This flag is used by the Oracle standard DBMS_STATS implementation which is not considered by SAP - for more details check the Oracle documentation "13.3.1.5 Determining Stale Statistics".
    The GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS procedures gather new statistics for tables with stale statistics when the OPTIONS parameter is set to GATHER STALE or GATHER AUTO. If a monitored table has been modified more than 10%, then these statistics are considered stale and gathered again.
STALE_PERCENT determines the percentage of rows in a table that have to change before the statistics on that table are deemed stale and should be regathered. The valid domain for stale_percent is the non-negative numbers; the default value is 10%. Note that if you set stale_percent to zero, the AUTO STATS gathering job will gather statistics for this table every time a row in the table is modified.
    SAP has its own automatism (like described with brconnect and stats_change_threshold) to identify stale statistics and how to collect statistics (percentage, histograms, etc.) and does not use / rely on the corresponding Oracle default mechanism.
    > Any best practices for me to consider?  Is there some kind of onetime scan I should do to check the health of all stats?
    No performance issue? No additional and unnecessary load on the system (e.g. dynamic sampling)? No brconnect runtime issue? Then you don't need to think about the brconnect implementation or special settings. Sometimes you need to tweak it (e.g. histograms, sample sizes, etc.), but then you have some specific issue that needs to be solved.
    Regards
    Stefan
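
For the curious, a rough, hand-rolled version of the staleness check Stefan describes; this only approximates brconnect's logic (the owner filter matches the earlier query):

-- flush in-memory monitoring data first so the view is current
exec dbms_stats.flush_database_monitoring_info;

select m.table_name,
       m.inserts + m.updates + m.deletes as changes,
       t.num_rows,
       round(100 * (m.inserts + m.updates + m.deletes)
             / nullif(t.num_rows, 0), 1) as pct_changed
from   dba_tab_modifications m
join   dba_tables t
  on   t.owner = m.table_owner
 and   t.table_name = m.table_name
where  m.table_owner = 'SAPR3'
and    m.partition_name is null          -- table-level rows only
order  by pct_changed desc nulls last;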

  • Slow starup of Java application - best practices for fine tuning JVM?

    We are having problems with a java application, which takes a long time to startup.
    In order to understand our question we better start with some background info. You will find the question(s) after that.
    Background:
    The setup is as follows:
In a client-server solution we have a Win XP fat client running Java 1.6.0_18 (Sun JRE). The fat client contains a lot of GUI and connects to a server for DB access. Client machines are typically 1 to 3 years old (there are problems even on brand-new machines). They have the client version of the JRE, Standard Edition, installed (Java SE 6 update 10 or better). Pretty much usual stuff so far.
We have done a lot of profiling on the client code, and yes, we have found parts of our own Java code that need improving; we are all over this. The server side seems OK, with good response times. So far, we haven't found anything about shaky net connections or endless loops in the Java client code or similar.
Still, things are not good. Starting the application takes a long time. Too long.
There are many complicating factors, but here is what we think we have observed:
There is a problem with cold vs. warm starts of the application. Apparently, after a reboot of the client PC, things are really, really bad, and it sometimes takes up to 30-40 seconds to start the application (until we arrive at the start GUI in our app).
If we run our application, close it down, and then restart without rebooting, things are a lot better. It then usually takes something like 15-20 seconds, which is "acceptable". Not good, but acceptable.
    Any ideas why?
I have Googled it, and some links seem to suggest that the reason could be disk cache, where vital jars are already in disk cache on the warm start. Does that make any sense? Virus scanners presumably run in both cases.
People still think that 15-20 seconds on the warm start is an awfully long time, even though there is a lot, a lot, of functionality in the application.
We got a suggestion to use IBM's JRE, as it can do some tricks (not sure what) our Sun JRE can't do concerning the warm- and cold-start problem. But that is not an option for us. And no one has come up with any really good suggestions for the Sun JRE so far.
On the Java Quick Starter (JQS), which improves initial startup time for most Java applets and applications: might that be helpful? People on the internet seem more interested in uninstalling the thing than actually installing it, though. And it seems very proprietary; it doesn't look like we can give our own jar files to it.
We could obviously try to "hide" the problem in some way and make it "seem" quicker, since perceived performance can be just as good as actual performance. But it does seem a bad solution. So for the cold start we will probably try reading the jar files beforehand, so they are in disk cache before our application starts, and see if that helps us.
Still, OK, the cold start is the real killer, but the warm start isn't exactly wonderful either.
    People have suggested that we read more on the JVM and performance.
http://java.sun.com/javase/technologies/performance.jsp
http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
and the use of JVM flags: "-Xms", "-Xmx", etc.
    And here comes the question .. da da ...
    Concerning various suggested reading material.
It is very much appreciated, but we would like to ask people here if it is possible to get more specific pointers to where the gold might be buried.
I.e., in an ideal world we would have time to read and understand all of these documents in depth. However, in this less-than-ideal world we are also doing a lot of very time-consuming profiling in our own Java code.
E.g., Java garbage collection is a huge subject, and JVM settings also. Sure, in the end we will probably have to do this all very thoroughly. But for now we are hoping for some heuristics on what other people do when facing a problem like ours.
Young generation, large memory pages, garbage-collection threads etc. all sound interesting, but what would you start with?
If you don't have info to decide: what kind of profiling would you be running, and which JVM setting would you then adjust in your trials?
In this pressed-for-time scenario, ignorance is not bliss; it makes it hard to pinpoint the JVM parameter or parameters to adjust. So some good pointers from experienced JVM "configurators" will be much appreciated!
Actually, if we can establish that fine-tuning these parameters is a good idea, it will certainly also be much easier to allocate the time for doing so (reading, experimenting, etc.) in our project.
So, all in all, what kind of performance improvement can we hope for? 5 out of 20 seconds on the warm start? Or is it 10% nitpicking? What's the ball-park figure for what we can hope to achieve here, given our setup? What do you think, based on the above?
Maybe someone out there has done some fine-tuning of JVM parameters in similar PC environments, with similar fat clients? "Fine-tuning so-and-so gave 5 seconds, so start your work with these one or two parameters."
Something like that - some best practices? That's what we are hoping for.
    best wishes
    -Simon

Thanks for the helpful answers from both you and kajbj.
    The app doesn't use shared network drives.
> What are you doing between main starting to execute and the UI being displayed?
Basically, calculating what to show in the UI. Accessing the server: not so much; there are some reads from a cache, but the profiling doesn't indicate that it should be a problem. Sure, I could shift the startup time to some other slot, but so far I haven't found a place where the end user wouldn't be annoyed.
> Caching of something would seem most obvious. Normal VM stuff seems unlikely.
With profiling I basically find that "everything" takes a lot longer in the cold-start scenario. Some of our local Java methods are going to be rewritten following our review. But what else can be tuned? You guys don't think the Java Quick Start approach, with more jars in disk cache, will give something? And how should that be done, and what do people do?
I.e., for the class loader I read something about:
1. Bootstrap class loader
2. Extensions class loader
3. System class loader
and I am wondering if this has something to do with the cold-start problem?
The extensions class loader loads the code in the extensions directories (<JAVA_HOME>/lib/ext).
So, should we move app classes to ext? Put them in one jar file? (We have many.) What's best practice about that?
Otherwise it seems to me that it must be about fine-tuning the JVM.
    I imagine that it is a question about:
    1. the right heap size
    2. the right garbage collection scheme
Googling heap size for XP, CHE22 writes:
"You are right; -Xms1600M works well, but -Xms1700M bombs."
Is that one best practice, or what?
On garbage collection, there are numerous posts, and much "masters of the Java black art" IMHO. And according to profiling, GC is not really that much of a problem anyway? Still, based on my description I was hoping for a short reply like "try setting these two parameters on your XP box, it worked for me", or something like that. With no takers on that one, I fear people are saying that there is nothing to be gained there?
    we read:
    [ -Xmx3800m -Xms3800m
    Configures a large Java heap to take advantage of the large memory system.
    -Xmn2g
    Configures a large heap for the young generation (which can be collected in parallel), again taking advantage of the large memory system. It helps prevent short lived objects from being prematurely promoted to the old generation, where garbage collection is more expensive.
    Unless you have problems with pauses, try granting as much memory as possible to the virtual machine. The default size (64MB) is often too small.
    Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. On the other hand, the virtual machine can't compensate if you make a poor choice.
The -XX:+AggressiveHeap option inspects the machine resources (size of memory and number of processors) and attempts to set various parameters to be optimal for long-running, memory-allocation-intensive jobs]
So is setting -Xms, -Xmx and -XX:+AggressiveHeap best practice? What kind of performance improvement should we expect?
    Concerning JIT:
    I read this one
    [the impact of the JIT compiler is obvious on the graph: at startup the time taken is around 500us for the first few values, then quickly drops to 130us, before falling again to 70us, where it stays for 30 minutes,
    for this specific issue, I greatly improved my performances by configuring another VM argument: I set -XX:CompileThreshold=50]
    The size of the cache can be changed with
    -Xmaxjitcodesize
    This sounds like you should do something with JIT args, but reading
    // We disable the JIT during toolkit initialization. This
    // tends to touch lots of classes that aren't needed again
// later and therefore JITing is counter-productive.
    java.lang.Compiler.disable();
    However, finding
    the sweet spots for compilation thresholds has been tricky, so we're
    still experimenting with the recompilation policy. Work on it
    continues.
sounds like there is no such straightforward path; it all depends...
    Ok, its good, when
    [Small methods that can be more easily analyzed, optimized, and inlined where necessary (and not inlined where not necessary). Clearly delineated uses of data so that usage patterns and lifetimes are apparent. ]
    but when I read this:
    [The virtual machine is responsible for byte code execution, storage allocation, thread synchronization, etc. Running with the virtual machine are native code libraries that handle input and output through the operating system, especially graphics operations through the window system. Programs that spend significant portions of their time in those native code libraries will not see their performance on HotSpot improved as much as programs that spend most of their time executing byte codes.]
I have the feeling that we might not be able to improve performance that way?
    Any comments?
Otherwise I was wondering about
-XX:CompileThreshold=50 -Xmaxjitcodesize (large, but how large?)
Somehow, we still feel that someone out there should have experienced similar problems? But obviously there is no guarantee that that someone will surf by here!
In C++ we used to just write everything ourselves. Here it does seem to be a question of the right use of other people's stuff?
You are kind of hoping for a shortcut, so you don't have to read an endless number of documents but can find a short document that actually addresses your problem... well.
    -Simon

  • ES2 best practice for how much stuff should be in one application?

I'm wondering if there is a best practice or recommended maximum for the number of forms/processes/etc. that you should have contained within one application in ES2. I have an application which has about 5 processes and over 300 XDP forms. "Deploying" the application takes probably 5 minutes or longer. It seems to be working fine, but I'm curious whether this will cause any problems and whether there is a recommended threshold.

I don't think there is a limit on the number of processes and forms to be used within an application.
However, there is a recommendation not to have more than 20 variables in a single process.
Each process created within your application will become a service, so it doesn't matter whether you have 500 processes in one application or 50 processes in 10 applications: you will end up with 500 services deployed into the Java runtime either way.
The form count also doesn't matter, as forms just stay within the repository (not in the Java runtime).
The only issue with enormous resources within an application is the response time to deploy to the application server (which you already mentioned here).
So, if you can split your resources into manageable units, that will reduce your check-in/deploy time.
    Nith

  • High performance website, best practices?

    Hello all,
    I'm working on a system with a web service/Hibernate (Java code linking web pages to the database) front-end which is expected to process up to 12,000 transactions per second with zero downtime. We're at the development/demonstration stage for phase 1 functionality but I don't think there has been much of a planning stage to make sure the metrics can be reached. I've not worked on a system with this many transactions before and I've always had downtime where database and application patches can be applied. I've had a quick look into the technologies available for Oracle High Availability and, since we are using 11g with RAC I know we have at least paid for them even if we're not using them.
    There isn't a lot of programming logic in the system (no 1000-line packages accessing dozens of tables, in fact there are only about 20 tables) and there are very few updates. It's mostly inserts and small queries getting a piece of data for use in the front-end.
    What I'd like to know is the best practice development for this type of system. As far as I know, the only person on the team with authority and an opinion on technical architecture wants to use the database as a store of data and move all the logic into the front-end. The thinking behind this is
    1) it's easier to load balance or increase capacity in the front-end
    2) the database will be the bottleneck in the system so should have as little demand placed on it as possible
    3) pl/sql packages cannot always be updated without downtime (I'm not sure if this is true or if it can be managed -- the concern is that packages become invalid whilst the upgrade script is running -- or how updates in the front-end could be managed any better, especially if they need to be coordinated with changes to tables)
    4) reference tables can be cached in the front-end to cut down on data access
    Views please!

    Couple of thoughts
- Zero downtime (or at least very close to it) is achievable, but there is a rapidly diminishing return on the cost of squeezing the last few percent out of uptime; if you can have the odd planned maintenance window, you can make your life a lot easier.
- If you decide ahead of time that the database is going to be the bottleneck, then it probably will be!
- I can understand where they are coming from with their thinking: the web tier will be easier to scale out, but eventually all that data still needs to get into the database. The database layer is where you need to start the design to get the most out of the platform. Can it handle 12,000 TPS? If it can't, then it doesn't matter how quickly your application layer can service those requests.
- If this is mainly inserts, could these be queued in some sort of message queue? Allow the clients to get an instant (well, almost) 'done' confirmation, where the database will be eventually consistent? It very much depends on what this is being used for, of course, but this could help with both the performance (at least the 'perceived' performance) and the uptime requirement; see the sketch after this list.
- Caching fairly static data sounds like a good idea to me.
    Carl
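
On the message-queue bullet above: since the stack already includes Oracle 11g, one option is Oracle Advanced Queuing. A minimal sketch of the plumbing only; all names are hypothetical, and it assumes you have already defined a payload object type:

begin
  dbms_aqadm.create_queue_table(
    queue_table        => 'app.insert_qt',           -- hypothetical queue table
    queue_payload_type => 'app.insert_payload_t');   -- object type you define
  dbms_aqadm.create_queue(
    queue_name  => 'app.insert_q',
    queue_table => 'app.insert_qt');
  dbms_aqadm.start_queue(queue_name => 'app.insert_q');
end;
/

Clients enqueue and get their 'done' back immediately; a background consumer dequeues and performs the actual inserts, which is exactly the eventual-consistency trade-off described above.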

  • Multiple IPs and Outbound IP on 2008, best practice suggestion...

    Hello,
    I need a suggestion on an issue;
I have a Windows 2008 R2 SP1 Std. Ed. server with 3 IPs, each of them using the same gateway. By design, the IP which is closest to the gateway is the default outbound IP on W2K8_R2_SP1_SE.
I want to choose one of the other 2 assigned IPs as the default outbound one.
    example:
    GATEWAY: 10.0.0.1
    IP1: 10.0.0.2 (default outbound by design)
    IP2: 10.0.0.3 (the one I want it to be default outbound)
    IP3: 10.0.0.4 (not important)
There are basically 2 choices doable for me right now. Can you please take a moment and suggest one of the solutions below, or state if you know the best practice for such a case? Thank you very much in advance =)
    First Solution:
apply this command for each of the two IPs that should not be used for outbound traffic (e.g. 10.0.0.2 and 10.0.0.4): netsh int ipv4 add address 12 10.0.0.2 255.x.x.x skipassource=true
    then apply these 3 hotfixes:
    IP addresses are still registered on the DNS servers even if the IP addresses are not used for outgoing traffic on a computer that is running Windows 7 or Windows Server 2008 R2
    http://support.microsoft.com/kb/2386184
    The "skipassource" flag of IP addresses is cleared after you use the GUI to change IP settings of a network adapter in Windows 7 or in Windows Server 2008 R2
    http://support.microsoft.com/kb/2554859
    FIX: IIS Manager does not display IP addresses that are assigned to the network adapter together with the skipassource flag
    http://support.microsoft.com/kb/2551090
    Second Solution:
Simply create 2 interfaces. Use the first one with the IP that I want to be the default outbound address, and dump all the other IPs on the second interface. The 2 interfaces will have the same gateway, but Windows will use the first one as the outbound default.

    I believe you want to set the metric on the interfaces.
You can do this by altering your routing table with route.exe or, alternatively, you can change the interface metric in the TCP/IP advanced properties for your network adapter (via Control Panel). By default it uses an automatic metric (i.e. Windows chooses which interface to use).
For your reference (and the reference of anyone else facing a similar challenge), the metric is a weighted value Windows will use to determine which interface to use for a particular endpoint. Here is the definition from the route.exe documentation:
metric Metric: Specifies an integer cost metric (ranging from 1 to 9999) for the route, which is used when choosing among multiple routes in the routing table that most closely match the destination address of a packet being forwarded. The route with the lowest metric is chosen. The metric can reflect the number of hops, the speed of the path, path reliability, path throughput, or administrative properties.
    Jason Warren
    @jaspnwarren
    jasonwarren.ca
    habaneroconsulting.com/Insights
