Best Practices Java 1.6.0.24 Runtime parameters

Hi,
We're deploying Java 1.6.0.24 for 300 end-users on ERP 11.5.10 CU2 on Linux.
We would like to know the best practices for Java 1.6.0.24 runtime parameters.

Please also see these docs.
Recommended Client Java Plug-in (JVM/JRE) For Discoverer Plus 10g (10.1.2) [ID 465234.1]
Diagnosing Forms Mouse Focus Problems Using JRE [ID 760250.1]
Login Loop on Internet Explorer after Session Timeout using JRE 1.6.0_18 [ID 1078228.1]
How Are The Forms JAR Files Stored with Sun JRE [ID 1058882.1]
Thanks,
Hussein

Similar Messages

  • Best practice for calling an AM method with parameters

    What is the best way to call an AM method with parameters from a backing bean?
    I usually use the BindingContainer to get the operation binding and then call its execute method. But when the method has parameters, how do I pass them?
    Thanks

    Hi,
    the same way; set the arguments on the operation binding's parameter map before executing:
    operationBinding.getParamMap().put("argument1Name", argument1Value);
    operationBinding.getParamMap().put("argument2Name", argument2Value);
    operationBinding.execute();
    Frank
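    For context, here is a minimal backing-bean sketch of the pattern Frank describes (the binding name "myAmMethod" and the argument names are placeholders; the method must be exposed on the AM client interface and declared in the page definition):

    import oracle.adf.model.BindingContext;
    import oracle.binding.BindingContainer;
    import oracle.binding.OperationBinding;

    public class MyBackingBean {
        public void callAmMethod() {
            // Look up the operation binding declared in the page definition.
            BindingContainer bindings =
                BindingContext.getCurrent().getCurrentBindingsEntry();
            OperationBinding op = bindings.getOperationBinding("myAmMethod");

            // Supply the arguments by name before executing.
            op.getParamMap().put("argument1Name", "someValue");
            op.getParamMap().put("argument2Name", Integer.valueOf(42));

            Object result = op.execute();
            if (!op.getErrors().isEmpty()) {
                // Handle binding errors here.
            }
        }
    }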

  • Best Practice Needed: Global Application Properties...

    Hi All,
    When developing a web-based application that reads certain configurable parameters from .properties files, I usually put the appropriate code in a static block in the appropriate Java class, storing the property in a static final constant. For example, a DBConnection class might have a static block that reads the driver, username, and password from a properties file.
    My question is, what are some "best practice" techniques for accessing and storing such parameters for use in an application? Are all global properties initialized in one class? At the same time? Only when first needed?

    Overall, I would say that your approach is fine. Personally, I load properties through a single class, something like PropertyReader, and have the different classes initialize their static fields via a get method on that class, like getProperty("db.user"). I prefer to load them via a single class because I can place all of my IO trapping in one location; it is easier to implement new security measures and, if necessary, easier to support internationalization.
    I initialize all properties once, at startup, into a wrapper object, typically ResourceBundle or Properties (although Hashtable or anything else would be suitable). I believe that it is best to initialize all properties at the same time, at startup, because the cost of storing properties that may not be used is going to be less than the cost of making multiple IO calls to load properties on a need-by-need basis. In other words, you are almost always going to take a bigger performance hit by loading properties only when a request for that key is received, rather than loading them all at once.
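    As a minimal sketch of that single-loader approach (the class name, file name, and keys are illustrative):

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public final class PropertyReader {
        private static final Properties PROPS = new Properties();

        static {
            // Load everything once, at class initialization, so all IO
            // trapping lives in one place.
            InputStream in = PropertyReader.class.getResourceAsStream("/app.properties");
            try {
                if (in != null) {
                    PROPS.load(in);
                }
            } catch (IOException e) {
                throw new ExceptionInInitializerError(e);
            } finally {
                if (in != null) {
                    try { in.close(); } catch (IOException ignored) { }
                }
            }
        }

        private PropertyReader() { }

        public static String getProperty(String key) {
            return PROPS.getProperty(key);
        }
    }

    Other classes then initialize their static fields via the accessor, e.g. private static final String DB_USER = PropertyReader.getProperty("db.user");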

  • SAP ASE Best Practice latest update

    Hello experts,
    just wondering if somebody has already thoroughly reviewed the latest guide for best practices on SAP Sybase ASE?
    I am talking about the document from note 1680803 - SYB: Migration to SAP Adaptive Server Enterprise - Best Practice (former note 1722359 - SYB: Running SAP applications on SAP ASE - Best Practice).
    The guide for normal runtime operation was merged with the guide for migration, but there are some contradictory statements.
    Apart from the fact that the case study is again designed for a server with huge memory and lots of CPU cores (so not a typical case; I wonder who sets up such huge servers so often...), I have found some inconsistencies.
    E.g. in the part "Reconfigure Engines and Parallel Processing", the text talks about limiting ASE engines to 16, but the command configures 32:
    alter thread pool syb_default_pool with thread count = 32, idle timeout = 2000
    This is unchanged from the previous setup for migration. Is this just a typo? I understand it should be 16, and then the number of network tasks for normal operation would also be 4 (as mentioned at the beginning of the guide, you normally set up 1 per 3-4 engines). If this is not a typo, then the number of network tasks is wrong, as it should be 8.
    They also introduced the idle timeout, but only talk about ERP and a possibly lower value for Solman - does this mean that for BW you keep the default value (which, if I am not mistaken, is 100)? As per ADM540 you should even decrease this timeout when the SAP system shares a server with the database - I know that document is old, but it is again contradictory; I am not saying it is wrong, but it is not well explained.
    If anybody has checked the new version of the guide, please let me know. I think it is a bit messed up, and it is a bit difficult to distinguish what you should set up for the migration case and what for the normal operation case.
    Thanks!
    Regards,
    Matus

    Actually, quite a few customers run with that many engines and that much memory. In fact, it is difficult these days to even buy a server with less than 128GB of memory and 16 cores/32 threads. Pretty much the only time we see less is when the install is in a VM. Interestingly, we had comments on the first version suggesting the numbers were not realistic because the typical systems being deployed were much larger. In addition, in my experience with customers on SAP systems, they were not aware of how much memory was necessary to really support medium to large systems, based on the configurations they were attempting.
    I am sorry that you feel some of the examples are contradictory. You are correct in pointing out that the text refers to 16 engines while the example configures 32. So yes, for that specific example, it should have been 16.
    Secondly, I have not seen ADM540, but I think there is a bit of a problem if it suggests that. In my opinion (and I have spent a lifetime tuning ASE), the idle timeout for ERP and BW should likely both be 1000+, and 2000 is not unreasonable. The comment in ADM540 likely applies when ASE and a NW CI are sharing the same cores - e.g. you have a 4-core box, ASE is running on 2 cores (we will ignore threads for this discussion), and you have 30 NW worker processes, which obviously will need to bump ASE off the CPU in order to run. This may be fine in a test/dev or even a Solution Manager system, but bumping ASE off the core is NOT a good thing for a production system. In fact, I would encourage using numactl or similar to fence off the cores used for ASE from NW worker processes if at all possible. We have seen cases of overloaded NW installations with multiple CI instances, each with hundreds of worker processes, starving CPU away from ASE. So I would actually tend to be a bit more than firm in suggesting that 100 is a very bad starting point.
    Given the number of client-side joins that SAP uses to avoid [DBMS proprietary] temp tables, it is critical that ASE's (or any DBMS's) response time be minimized as much as possible. Having ASE yield the core practically as soon as it finishes processing one task (and puts it to sleep pending an IO) just causes things to run slowly. Think of a typical query that returns 10 rows - say, wide enough that each row fills one packet. If the packet transmit time (and client ACK) takes more than 100 microseconds on CPU (almost a given for network interactions, as clock ticks are in nanoseconds and networking is minimally milliseconds - 1000 microseconds), ASE would yield the CPU every time it sent a packet. When the client wanted the next packet, the OS would have to wake up the ASE process (an interrupted sleep), which is a nasty heavyweight operation. Hence it is best for ASE to hang out on the CPU until reasonably sure that nothing more is going to happen very soon - and on current CPUs, having it run for 1-2ms (1000-2000 microseconds) shouldn't be a hardship.
    If you created a separate thread pool for batch worker processes, then I could see maybe using a lower idle timeout such as 200 or 250. 100 is just plain too low in my mind; it is like saying ASE is expecting an odd query every few seconds vs. a steady workload. Basically, at that level, there had better be a task in the ASE job queue or one already on the way over the network, or that engine is going to sleep.
    As for ADM540 itself, I have not seen that class, but one customer did show me the notebook from a class (ASE Sys Admin) they attended, and it was really targeted at non-SAP installations more than SAP installations, from a reality/experience aspect. Part of the issue with the class the customer showed me was that it borrowed liberally from the old SY classes as a starting point, but at the point the class was developed there was not a lot of experience with running SAP installations on ASE to really point out the fine-tweaking areas such as idle timeout.
    However, the document was really aimed primarily at Business Suite vs. BW systems or a Solution Manager install (which are much smaller). There are a lot of other considerations for BW that the guide doesn't get into, although some of the sizing is a better start than the defaults provided by SAPINST.
    The former runtime guide essentially was just merged in to the Post-Migration Steps section.
    I may do a quick refresh in the near future (due to some recent experiences), so if you have other specific examples of the text and SQL not aligning, please let me know.

  • Coldfusion, MS SQL, Hash Best Practices,...

    Hello,
    I am trying to store hashed data (user passwords) in an
    MS SQL database; the datatype in the database is set to varbinary.
    I get a datatype conflict when trying to insert the hashed data. It
    works when the datatype in the database is set to varchar.
    I understand that you can call your hash function with
    arguments that will convert the data before sending it to the
    database, but I am not clear on how this is done. Now, along with
    any assistance with the conversion, what exactly is the best
    practice for storing hashed data? Should I store it as varchar or
    varbinary? Of course, with varchar I won't have the problem, but I am
    interested in best practices as well.
    Thnx

    brwright,
    I suggest parameterizing your queries to add protection from
    injection.
    http://livedocs.adobe.com/coldfusion/6.1/htmldocs/tags-b20.htm
    Hashing is best suited for passwords because the encryption
    is one way; once encrypted using hash(), it can't be decrypted.
    For other fields that you might want to encrypt and still have the
    ability to decrypt, you can use the encrypt() and decrypt()
    functions.
    http://livedocs.adobe.com/coldfusion/6.1/htmldocs/functi75.htm
    I think there are also new encryption functions available in
    ColdFusion 8...
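    The thread is about ColdFusion, but the trade-off is easy to see in Java terms: hash to a hex string and the result fits a varchar column, with a parameterized statement providing the injection protection mentioned above. A rough sketch (table and column names are made up):

    import java.security.MessageDigest;
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class PasswordStore {
        // Hash the password and return lowercase hex, suitable for a VARCHAR column.
        static String sha256Hex(String password) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(password.getBytes("UTF-8"));
            StringBuilder sb = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        }

        // Parameterized insert; the bind variables play the same role as <cfqueryparam>.
        static void savePassword(Connection con, String user, String password) throws Exception {
            PreparedStatement ps = con.prepareStatement(
                "INSERT INTO users (username, password_hash) VALUES (?, ?)");
            try {
                ps.setString(1, user);
                ps.setString(2, sha256Hex(password));
                ps.executeUpdate();
            } finally {
                ps.close();
            }
        }
    }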

  • Best Practices for Defining NDS Java Projects...

    We are doing a Proof of Concept on using NDS to develop non-SAP Java applications.  We are attempting to determine if we can replace our current Java development tools with NDS/WAS.
    We are struggling with SAP's terminology and "plumbing" for setting up and defining Java projects. For example, what are Tracks, Software Components, Development Components, etc., and when do you define them? All of these terms are totally foreign to us and do not relate to our current Java environment (at least not that we can see). We are also struggling with how the DTR and activities tie in to those components.
    If any one has defined best practices for setting up Java projects or has struggled with and overcome these same issues, please provide us with some guidance.  This is a very frustrating and time-consuming issue for us.
    Thank you!!

    Hi Peggy,
    In the Component Model we divide software projects into small components. Components can use other components in a well-defined manner.
    A development object is a part of a component that can be changed or developed in some way; it provides the component with a certain part of its functionality. A development object may be a Java class, a Web Dynpro view, a table definition, a JSP page, and so on. Development objects are always stored as "sources" in a repository.
    A development component can be defined as a frame shared by a number of objects which are part of the software.
    Software components combine development components (DCs) into larger units for delivery and deployment.
    A track comprises the configurations and runtime systems required for developing software component versions. It ensures stable states of deliverables used by subsequent tracks.
    The Design Time Repository (DTR) handles versioning and source code management, distributed development of software in teams, and transport and replication of sources.
    You can also find a lot of support in SDN for the above concepts, with tutorials.
    Refer to this link for an overview of the Java Development Infrastructure (JDI):
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webas/java/java development infrastructure jdi overview.pdf
    To understand further:
    Working with NetWeaver Development Infrastructure:
    http://help.sap.com/saphelp_nw04/helpdata/en/03/f6bc3d42f46c33e10000000a11405a/content.htm
    In the above link you can find all the concepts clearly explained. You can also find the required tutorials for development.
    Regards,
    Vijith

  • Where to put java code - Best Practice

    Hello. I am working with the Jdeveloper 11.2.2. I am trying to figure out the best practice for where to put code. After reviewing http://docs.oracle.com/cd/E26098_01/web.1112/e16182.pdf it seemed like the application module was the preferred spot (although many of the examples in the pdf are in main methods). After coding a while though, I noticed that there were quite a few libraries imported, and wondered whether this would impact performance.
    I reviewed postings on the forum, especially Re: Access service method (client interface) programmatically . This link mentions accessing code from a backing bean -- and the gist of the recommendations seems to be to use the data control to drag it to the JSF, or use the bindings to access code.
    My interest lies in where to put Java code in the first place: in the view object, entity object, application module, backing bean... other?
    I can outline several best guesses about where to put code and the pros and cons:
    1. In the application module
    Pros: Centralized location for code makes development and support more simple as there are not multiple access points. Much like a data control centralizes services, the application module can act as a conduit for different pieces of code you have in objects in your model.
    Cons: Everything in one place means the application module becomes bloated. I am not sure how memory works in Java - if the app module has tons of different libraries, are they all called when even a simple query re-execute method is invoked? Memory hog?
    2. Write code in the objects it affects. If you are writing code that accesses a view object, write it in a view object. Then make it visible to the client.
    Pros: The code is accessed via fewer conduits (for example, I would expect that if you call the application module from a JSF backing bean, and the application module then calls the view object, you have three different pieces of code involved).
    Cons: The code gets spread out, harder to locate, etc.
    I would greatly appreciate your thoughts on the matter.
    Regards,
    Stuart

    First point here is when you say "where to put the java code" and you're referring to ADF BC, the point is you put "business logic java code" in the ADF Business Components. It's fine of course to have Java code in the ViewController layer that deals with the UI layer. Just don't put business logic in the UI layer, and don't put UI logic in the model layer. In your 2 examples you seem to be considering the ADF BC layer only, so I'll assume you mean business logic java code only.
    Meanwhile, I'm not keen on the term "best practice", as people follow best practices without thinking; typically best practices come with conditions, and people forget to apply them. Luckily you're not doing that here, as you've thought through the pros and cons of each (nice work).
    Anyway, back on topic and off my soap box, as for where to put your code, my thoughts:
    1) If you only have 1 or 2 methods put it in the AppModuleImpl
    2) If you have hundreds of methods, or there's a chance #1 above will morph into #2, split the code up between the AppModuleImpl, ViewImpl and ViewRowImpls. Why? Because your AM will become overloaded with hundreds of methods making it unreadable. Instead put the code where it should logically go. Methods that work on a specific VO row go into the associated ViewRowImpl, methods that work across rows in a VO go into the ViewImpl, and methods that work across VOs in the associated AppModuleImpl.
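    A rough sketch of that split, with invented names (the Impl classes come from enabling custom Java on the respective objects, each in its own file):

    // Logic that works on a single row goes in the ViewRowImpl.
    public class OrderRowImpl extends oracle.jbo.server.ViewRowImpl {
        public void markShipped() {
            setAttribute("Status", "SHIPPED");
        }
    }

    // Logic that works across the rows of one VO goes in the ViewObjectImpl.
    public class OrdersVOImpl extends oracle.jbo.server.ViewObjectImpl {
        public int countOpenOrders() {
            int count = 0;
            oracle.jbo.RowSetIterator it = createRowSetIterator(null);
            while (it.hasNext()) {
                if ("OPEN".equals(it.next().getAttribute("Status"))) {
                    count++;
                }
            }
            it.closeRowSetIterator();
            return count;
        }
    }

    // Logic that spans several VOs goes in the AppModuleImpl, exposed on the
    // client interface so it can be called through the bindings.
    public class OrdersAppModuleImpl extends oracle.jbo.server.ApplicationModuleImpl {
        public void closeShippedOrders() {
            // Coordinate OrdersVOImpl and other view objects here.
        }
    }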
    To be honest, whichever option you choose, one thing I do recommend as a best practice is to be consistent and document the standard so your other programmers know.
    Btw, there isn't an issue with loading lots of libraries/imports into a class; it has no runtime cost. However, if your methods require lots of class variables, then yes, this will have a memory cost.
    On a side note if you're interested in more ideas around how to build ADF apps correctly think about joining the "ADF EMG", a free online forum which discusses ADF architecture, best practices (cough), deployment architectures and more.
    Regards,
    CM.

  • Best Practice for storing a logon to website in a desktop java app

    Hoping someone well versed in java related security best practices can point me in the right direction.
    I have a small Java PC application that uses the SOAP API to send various data to a 3rd party.
    Currently I am storing the logon credentials for this 3rd party in a local database used by the application.
    The username/password to connect to this database is encrypted and never accessed in clear text in the code.
    (Although, since the application is stand-alone, everything needed to decrypt the database credentials is packaged
    with the application. It would not be easy to get the clear-text credentials, but it is possible.)
    The caveat in my case is that the user of the application is not even aware (nor should they be) that the application is interacting with
    the 3rd party API at all. All the end user knows is that an entity (that they already have a relationship with) has asked them to
    install this application in order to provide the entity with certain data from the user.
    Is there a more secure way to do this while maintaining the requirement that the user need not know the logon credentials to the 3rd party?
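    To make the described setup concrete, here is a bare sketch of the decrypt-on-use piece (key handling deliberately simplified; as noted above, whatever key material ships with a stand-alone app is ultimately recoverable):

    import javax.crypto.Cipher;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    public class CredentialVault {
        private final byte[] key;  // 16 bytes for AES-128; bundled/derived by the app
        private final byte[] iv;   // 16 bytes

        public CredentialVault(byte[] key, byte[] iv) {
            this.key = key;
            this.iv = iv;
        }

        // Decrypt stored credential bytes only at the moment they are needed.
        public String decrypt(byte[] cipherText) throws Exception {
            Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
            c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
                   new IvParameterSpec(iv));
            return new String(c.doFinal(cipherText), "UTF-8");
        }
    }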

    Moderator advice: Don't double post the same question. I've removed the other thread you started in the Other Security APIs, Tools, and Issues forum.
    db

  • Best practice of OSB logging Report handling or java code using publish

    Hi all,
    I want to implement common error handling for OSB. I did two implementations, described below, and just want to know which one is the best practice.
    1. Using a custom report handler: whenever we want to log, we use the OSB report action, which calls a custom Java class that logs the data into the DB.
    2. Using a plain Java class: create a Java class and publish to the proxy, which calls this Java class to do the logging.
    Which is the best practice, and what are the pros and cons?
    Thanks
    Phani
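    Whichever routing you choose, the actual write usually ends up in a small helper like the sketch below (table and JNDI names are placeholders). With approach 2 this would be invoked from a Java Callout action, which is why the method is static:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class OsbDbLogger {
        // OSB Java Callout actions invoke static methods.
        public static void log(String serviceName, String payload) throws Exception {
            DataSource ds = (DataSource) new InitialContext().lookup("jdbc/LoggingDS");
            Connection con = ds.getConnection();
            try {
                PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO osb_log (service_name, payload) VALUES (?, ?)");
                ps.setString(1, serviceName);
                ps.setString(2, payload);
                ps.executeUpdate();
                ps.close();
            } finally {
                con.close();
            }
        }
    }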

    Hi Anuj,
    Thanks for the links; they have been helpful.
    I understand now that OSR is only meant to contain Proxy services. The synch facility is between OSR and OSB so that, when you are not using OER, you can publish Proxy services to OSR from OSB. What I didn't understand was why there was an option to publish a Proxy service back to OSB and why it ended up as a Business service. From the link you provided, it is mentioned that this case is for multi-domain OSBs, where one OSB wants to use another OSB's service. It is clear now.
    Some more questions:
    1) At design time, no endpoints are generated in OER for Proxy services. How then do we publish our design-time services to OSR for testing purposes? What is the correct way of doing this?
    Thanks,
    Umar

  • What are Best Practice Recommendations for Java EE 7 Property File Configuration?

    Where does application configuration belong in modern Java EE applications? What best practice(s) recommendations do people have?
    By application configuration, I mean settings like connectivity settings to services on other boxes, including external ones (e.g. Twitter and our internal Cassandra servers...for things such as hostnames, credentials, retry attempts) as well as those relating business logic (things that one might be tempted to store as constants in classes, e.g. days for something to expire, etc).
    Assumptions:
    We are deploying to a Java EE 7 server (Wildfly 8.1) using a single EAR file, which contains multiple wars and one ejb-jar.
    We will be deploying to a variety of environments: unit testing, local dev installs, and cloud-based infrastructure for UAT, stress testing, and production environments. **Many of our properties will vary with each of these environments.**
    We are not opposed to coupling property configuration to a DI framework if that is the best practice people recommend.
    All of this is for new development, so we don't have to comply with legacy requirements or restrictions. We're very focused on the current, modern best practices.
    Does configuration belong inside or outside of an EAR?
    If outside of an EAR, where and how best to reliably access them?
    If inside of an EAR we can store it anywhere in the classpath to ease access during execution. But we'd have to re-assemble (and maybe re-build) with each configuration change. And since we'll have multiple environments, we'd need a means to differentiate the files within the EAR. I see two options here:
    Utilize expected file names (e.g. cassandra.properties) and then build multiple environment specific EARs (eg. appxyz-PROD.ear).
    Build one EAR (eg. appxyz.ear) and put all of our various environment configuration files inside it, appending an environment variable to each config file name (eg cassandra-PROD.properties). And of course adding an environment variable (to the vm or otherwise), so that the code will know which file to pickup.
    What are the best practices people can recommend for solving this common challenge?
    Thanks.
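    One hedged sketch of option 2 using a CDI producer (the system property name app.env and the file-naming pattern are assumptions, not a standard):

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;
    import javax.enterprise.context.ApplicationScoped;
    import javax.enterprise.inject.Produces;

    @ApplicationScoped
    public class ConfigProducer {

        @Produces
        public Properties cassandraProperties() {
            // Environment selected per VM, e.g. -Dapp.env=PROD, matching
            // file names like cassandra-PROD.properties inside the EAR.
            String env = System.getProperty("app.env", "DEV");
            Properties props = new Properties();
            try (InputStream in = getClass()
                    .getResourceAsStream("/cassandra-" + env + ".properties")) {
                if (in != null) {
                    props.load(in);
                }
            } catch (IOException e) {
                throw new IllegalStateException("Cannot load config for env " + env, e);
            }
            return props;
        }
    }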

    Hi Bob,
    Sometimes when you create a model using a local WSDL file, instead of referring to the URL mentioned in the WSDL file it refers to, say, the "C:\temp" folder from which you picked up that file; you can check the target address of the logical port. Because of this, when you deploy the application on the server, it tries to search the "c:\temp" path instead of the path specified as the soap:address location in the WSDL file.
    The best way is to re-import your Adaptive Web Services model using the URL specified as the soap:address location in the WSDL file,
    like http://<IP>:<PORT>/XISOAPAdapter/MessageServlet?channel<xirequest>
    or you can ask your XI developer to give you the URL for the web service and a username/password for the server.

  • Best practice to install Bi-Java

    Hi,
    We are in the process of designing the system landscape for our client. We have installed NW04s usage type EP on one server. Now we need to decide on where to install the BI-Java usage type. So:
    1. Is adding BI-Java on the existing EP usage type recommended?
    2. What is the best practice:
      a> EP and BI-Java on the same server
      b> EP and BI-Java on separate servers
    What are the pros and cons of each choice?

    Hi,
    When you install BI JAVA, it will have its own EP and AS JAVA usage types under it. You have no option with this as they are required to run BI JAVA (just like you have AS ABAP under the BI usage type).
    The decisions you do have are:
    1. Should BI JAVA be installed on the same server as the BI usage type? (this is your choice)
    2. Should you use the EP usage type within BI JAVA for your enterprise portal deployment, or keep it dedicated for BI usage only and use a separate EP usage type for your enterprise portal deployment? (Most customers do the latter due to patching and upgrading considerations.)
    Please see the SAP NetWeaver 2004s master guide for more information. In addition we are updating the master guide with more information on this area. It should be published quite soon.
    Cheers,
    Mike.

  • Slow starup of Java application - best practices for fine tuning JVM?

    We are having problems with a java application, which takes a long time to startup.
    In order to understand our question we better start with some background info. You will find the question(s) after that.
    Background:
    The setup is as follows:
    In a client-server solution we have a Windows XP fat client running Java 1.6.0.18
    (Sun JRE). The fat client contains a lot of GUI and connects to a server for DB access. Client machines are typically 1 to 3 years old (there are problems even on brand-new machines). They have the client version of the JRE, Standard Edition, installed (Java SE 6 update 10 or better). Pretty much usual stuff so far.
    We have done a lot of profiling on the client code, and yes, we have found parts of our own Java code that need improving; we are all over this. The server side seems OK, with good response times. So far, we haven't found anything like shaky net connections or endless loops in the Java client code or similar.
    Still, things are not good. Starting the application takes a long time. Too long.
    There are many complicating factors, but here is what we think we have observed:
    There is a problem with cold vs. warm starts of the application. Apparently, after a reboot of the client PC things are really, really bad, and it takes (sometimes) up to 30-40 secs to start the application (until we arrive at the start GUI in our app).
    If we run our application, close it down, and then restart
    without rebooting, things are a lot better. It then usually takes
    something like 15-20 sec., which is "acceptable". Not good, but acceptable.
    Any ideas why?
    I have googled it, and some links seem to suggest that the reason could be disk cache, where vital jars are already in disk cache on the warm start? Does that make any sense? Virus scanners presumably run in both cases.
    People still think that 15 - 20 sec in start up on the warm start is an awful long time, even though there is a lot, a lot, of functionality in the application.
    We got a suggestion to use IBM's JRE, as it can do some tricks (not sure what) our Sun JRE can't do concerning the warm and cold start problem. But that is not an option for us. And no one has come up with any really good suggestions with the Sun JRE so far.
    Then there is the Java Quick Starter (JQS), which
    improves initial startup time for most Java applets and applications.
    Might that be helpful? People on the internet seem more interested
    in uninstalling the thing than actually installing it, though?
    And it seems very proprietary - can we even give our jar files to it?
    We could obviously try to "hide" the problem in some way and make it "seem" quicker, since perceived performance can be just as good as actual performance. But that does seem a bad solution. So for the cold start we will probably try reading the jar files up front, thereby having them in disk cache before startup of our application, and see if that helps us.
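    A crude sketch of that pre-reading idea (the lib directory is hypothetical; the point is just to stream the jars once so the OS file cache is warm before the real startup):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;

    public class JarPreloader {
        public static void main(String[] args) throws Exception {
            byte[] buf = new byte[64 * 1024];
            File[] files = new File("C:/myapp/lib").listFiles();  // hypothetical path
            if (files == null) {
                return;
            }
            for (File f : files) {
                if (!f.getName().endsWith(".jar")) {
                    continue;
                }
                InputStream in = new FileInputStream(f);
                try {
                    // Read and discard; we only want the file pulled into disk cache.
                    while (in.read(buf) != -1) { }
                } finally {
                    in.close();
                }
            }
        }
    }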
    Still, OK, the cold start is the real killer, but the warm start isn't exactly wonderful either.
    People have suggested that we read more on the JVM and performance.
    http://java.sun.com/javase/technologies/performance.jsp
    http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
    and the use of JVM flags "-Xms", "-Xmx", etc.
    And here comes the question .. da da ...
    Concerning various suggested reading material.
    It is very much appreciated, but we would like to ask people here if it is possible to get more specific pointers to where the gold might be buried.
    I.e. in an ideal world we would have time to read and understand all of these documents in depth. However, in this less-than-ideal world we are also doing a lot of very time-consuming profiling in our own Java code.
    E.g. Java garbage collection is a huge subject, and JVM settings also. Sure, in the end we will probably have to do all of this very thoroughly. But for now we are hoping for some heuristics on what other people do when facing a problem like ours...?
    The young generation, large memory pages, garbage collection threads etc. all sound interesting - but what would you start with?
    If you don't have the info to decide, what kind of profiling would you run, and which JVM settings would you then adjust in your trials?
    In this pressed-for-time scenario, ignorance is not bliss, but it makes it hard to pinpoint the JVM parameter or parameters to adjust. So some good pointers from experienced JVM "configurators" will be much appreciated!
    Actually, if we can establish that fine-tuning these parameters is a good idea, it will certainly also be much easier to allocate the time for doing so - reading, experimenting, etc. in our project.
    So, all in all, what kind of performance improvement can we hope for? 5 out of 20 secs on the warm start? Or is it 10% nitpicking? What is the ballpark figure for what we can hope to achieve here, given our setup? What do you think, based on the above?
    Maybe someone out there has done some fine-tuning of JVM parameters in similar PC environments, with similar fat clients...? "Fine-tuning so-and-so gave 5 secs, so start your work with these one or two parameters"?
    Something like that - some best practices? That is what we are hoping for.
    best wishes
    -Simon

    Thanks for helpful answer from both you and kajbj.
    The app doesn't use shared network drives.
    > What are you doing between main starting to execute and the UI being displayed?
    Basically, calculating what to show in the UI. Accessing the server - not so much; there are some reads from a cache, but the profiling doesn't indicate that it should be a problem. Sure, I could shift the startup time to some other slot, but so far I haven't found a place where the end-user wouldn't be annoyed.
    > Caching of something would seem most obvious. Normal VM stuff seems unlikely.
    With profiling I basically find that ''everything'' takes a lot longer in the cold start scenario. Some of our local Java methods are going to be rewritten following our review. But what else can be tuned? You guys don't think the Java Quick Start approach, with more jars in disk cache, will give something? And how should that be done / what do people do?
    I.e. for the class loader I read something about
    1.Bootstrap class loader
    2.Extensions class loader
    3.System class loader
    and am wondering if this has something to do with the cold start problem?
    The extensions class loader loads the code in the extensions directories (<JAVA_HOME>/lib/ext).
    So, should we move app classes to ext? Put them in one jar file? (We have many.) Is there a best practice about that?
    Otherwise it seems to me that it must be about fine-tuning the JVM?
    I imagine that it is a question about:
    1. the right heap size
    2. the right garbage collection scheme
    Googling heap size for XP, CHE22 writes:
    "You are right; -Xms1600M works well, but -Xms1700M bombs."
    Is that one best practice, or what?
    On garbage collection, there are numerous posts, and much "masters of the Java black art" material, IMHO. And according to profiling, GC is not really that much of a problem anyway? Still,
    based on my description I was hoping for a short reply like "try setting these two parameters on your XP box, it worked for me"... or something like that. With no takers on that one, I fear people are saying that there is nothing to be gained there?
    we read:
    [ -Xmx3800m -Xms3800m
    Configures a large Java heap to take advantage of the large memory system.
    -Xmn2g
    Configures a large heap for the young generation (which can be collected in parallel), again taking advantage of the large memory system. It helps prevent short lived objects from being prematurely promoted to the old generation, where garbage collection is more expensive.
    Unless you have problems with pauses, try granting as much memory as possible to the virtual machine. The default size (64MB) is often too small.
    Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. On the other hand, the virtual machine can't compensate if you make a poor choice.
    The -XX:+AggressiveHeap option inspects the machine resources (size of memory and number of processors) and attempts to set various parameters to be optimal for long-running, memory allocation-intensive jobs]
    So is setting -Xms, -Xmx and -XX:+AggressiveHeap
    best practice? What kind of performance improvement should we expect?
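    As a sanity check before judging any of these flags (note the quoted numbers target large servers; a 32-bit XP process can't address anywhere near 3800m, which is consistent with -Xms1700M "bombing" above), a tiny program can confirm what the VM actually received:

    public class HeapCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // maxMemory reflects -Xmx; totalMemory is the currently committed heap.
            System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
            System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
            System.out.println("free:       " + rt.freeMemory() / (1024 * 1024) + " MB");
        }
    }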
    Concerning JIT:
    I read this one
    [the impact of the JIT compiler is obvious on the graph: at startup the time taken is around 500us for the first few values, then quickly drops to 130us, before falling again to 70us, where it stays for 30 minutes,
    for this specific issue, I greatly improved my performances by configuring another VM argument: I set -XX:CompileThreshold=50]
    The size of the cache can be changed with
    -Xmaxjitcodesize
    This sounds like you should do something with JIT args, but reading
    // We disable the JIT during toolkit initialization. This
    // tends to touch lots of classes that aren't needed again
    // later and therefore JITing is counter-productive.
    java.lang.Compiler.disable();
    However, finding
    the sweet spots for compilation thresholds has been tricky, so we're
    still experimenting with the recompilation policy. Work on it
    continues.
    sounds like there is no such straightforward path; it all depends...
    OK, it's good when
    [Small methods that can be more easily analyzed, optimized, and inlined where necessary (and not inlined where not necessary). Clearly delineated uses of data so that usage patterns and lifetimes are apparent. ]
    but when I read this:
    [The virtual machine is responsible for byte code execution, storage allocation, thread synchronization, etc. Running with the virtual machine are native code libraries that handle input and output through the operating system, especially graphics operations through the window system. Programs that spend significant portions of their time in those native code libraries will not see their performance on HotSpot improved as much as programs that spend most of their time executing byte codes.]
    I have the feeling that we might not be able to improve performance that way?
    Any comments?
    Otherwise I was wondering about
    -XX:CompileThreshold=50 and -Xmaxjitcodesize (large, but how large?)
    Somehow, we still feel that someone out there must have experienced similar problems? But obviously there is no guarantee that that someone will surf by here!
    In C++ we used to just write everything ourselves. Here it seems to be a question of the right use of other people's stuff?
    You are kind of hoping for a shortcut, so you don't have to read an endless number of documents, but can find a short document that actually addresses your problem... well.
    -Simon

  • Java Server Faces best Practices

    Hello,
    I am starting to like JSF. But I want to know the following.
    Where is the controller in a JSF application? Is it the managed bean or a backing bean?
    Should I put my Business Delegate in a managed bean or in a backing bean? What is the difference?
    Can anyone tell?
    I would like to know JSF best practices. Blueprints?
    Thanks.
    Saludos,
    <Rory/>
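    For orientation, a tiny sketch of the usual split, with invented names: the managed/backing bean is where JSF hands control to your code, and it delegates to the business tier rather than containing it.

    // Declared as a managed bean (e.g. named loginBean) in faces-config.xml.
    public class LoginBean {
        private String username;

        public String getUsername() { return username; }
        public void setUsername(String u) { this.username = u; }

        // Action method referenced from the page, e.g. action="#{loginBean.login}".
        // The bean is controller glue; real work belongs in the business delegate.
        public String login() {
            boolean ok = new UserServiceDelegate().authenticate(username);
            return ok ? "success" : "failure";
        }

        // Stand-in for a real business delegate (hypothetical).
        static class UserServiceDelegate {
            boolean authenticate(String user) {
                return user != null && user.length() > 0;
            }
        }
    }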

    Hi Senthruan,
    The documentation referenced by the following links is based on an older version of JavaServer Faces technology.
    Backing Beans:
    http://java.sun.com/webservices/docs/1.3/tutorial/doc/JSFUsing3.html
    UI Components:
    http://java.sun.com/webservices/docs/1.3/tutorial/doc/JSFUsing3.html (just as a visual example). You should instead use the J2EE tutorial (http://java.sun.com/j2ee/1.4/docs/tutorial/doc/index.html), which is based on version 1.1 of JavaServer Faces technology. The JavaServer Faces material starts with chapter 17. The topics have moved around a bit, but the tutorial has a nice search engine, so you shouldn't have trouble finding the information you need.
    Jennifer

  • Best practice concerning embedding script in report vs.  controlling from Java

    Hi,
    I'm faced (probably not the only one) with adding some intelligence to my reports. In a prior post I was curious about displaying/hiding sections based on conditions found in the bean/POJO.
    Is there a best practice concerning embedding logic in the report in the form of formula(s), vs. using Java to get or create a field and then creating a formula on the fly? I suspect the answer has something to do with truly dynamic fields, and perhaps a little bit of both Java and script.
    Anyone on staff care to try answering?
    Peter

    Hi,
    log into your SAP ERP system using the SAP GUI and choose the following path in the SAP Menu:
    SAP Menu -> Accounting -> Controlling -> Cost Center Controlling -> Environment -> Set Controlling Area.
    Set the desired controlling area for your user there (DO NOT FORGET TO CLICK ON THE DISKETTE ICON) and try again.
    Regards,
    Stratos

  • Best practice versioning XI szenario with java proxies

    Our XI scenario includes services with java proxies.
    When the message structure changes, the Java proxies have to be changed, too. But not all connected applications can migrate to the new message structure at the same time.
    I need a solution to run different versions of the same scenario at the same time. This is also a requirement for my Java proxy service; thus I need to run the Java proxy service under different versions.
    To maintain minor and major changes, what would be the best practice for versioning in XI scenarios and for Java proxies?
    Is there a better solution than using different namespaces inside the XI scenarios (e.g. adding the version number) and using different EJB names for the proxy services (e.g. including the version number in the name)?
    Thanx,
    Michael

    Hi Michael,
    I don't have a best practice for you, but...
    In general, you tend to run into trouble when making incompatible changes to message structures. Since the Software Component Version is not used as part of the message key (only the interface and namespace are used), you can't use this versioning mechanism to help you. Thus you are forced to create a new interface instead of a new version of the same interface. If you can think up a new name (one that won't just cause confusion), fine. Otherwise you are forced to include a version identifier in either the interface name or the namespace. The latter is used in some public XML schema definitions, e.g. xCBL. In other words: versioning via namespaces. Not exactly elegant, but perhaps your best bet.
    Best regards,
    Thorsten
