Best Practices Q - how to allocate memory to each instance

We are moving to a setup that will include multiple ColdFusion
instances per machine, with the web server on an entirely separate
machine (distributed CF mode). What is the current thinking on how
much of a machine's total RAM should be apportioned among the CF
instances, via each instance's JVM config?
For example (and this is for the purposes of demonstration,
not real-world figures): if we have 1 GB of RAM on a machine
running 3 instances, do we devote 750 MB to all 3 instances in
total, so that each instance gets 250 MB? Or does each instance
get 750 MB? My thinking is that if each had 750 MB, there would be
the potential to bring the machine down.
I came up with the 750 MB figure as I've been under the
assumption that giving CF roughly three quarters of RAM is the
general rule of thumb in a non-multi-server setup.
Appreciate any and all input,
thanks

If you allot more heap than the machine has physical memory, the
OS will simply push the excess into swap space, and under enough
load the machine can slow to a crawl.
With only 1 GB of memory, I wouldn't try to run more than one
instance, with 768 MB as the high-water mark (-Xmx) and perhaps
256 MB as the initial heap (-Xms).
Bump the machine up to 4 GB and you could go with three 512 MB
instances or two 1 GB instances.
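As a rough illustration of how those figures translate into JVM settings: in a multi-instance setup each instance typically gets its own jvm.config with -Xms/-Xmx heap bounds. The small Java sketch below just verifies what a given instance was actually granted; the class name and flag values are illustrative (taken from the suggestion above), not a recommendation.

    // Launch with the instance's flags, e.g.: java -Xms256m -Xmx768m HeapCheck
    public class HeapCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            // totalMemory() is the heap currently reserved; maxMemory() reflects -Xmx.
            System.out.println("Current heap: " + rt.totalMemory() / mb + " MB");
            System.out.println("Max heap:     " + rt.maxMemory() / mb + " MB");
        }
    }

The sum of the -Xmx values across all instances, plus room for the OS and non-heap JVM overhead, should stay under physical RAM, or you are back in swap territory.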

Similar Messages

  • Best Practices Question: How to send error message to SSHR web page.

    Best Practices Question: How to send an error message to an SSHR web page from a custom PL/SQL procedure called by an SSHR workflow.
    For the Manager Self-Service application we've copied various workflows, which were modified to meet business needs. Part of this exercise was creating custom PL/SQL package procedures that gather details on the workflow and use them in custom notifications sent by the workflow.
    What I'm looking for is: if/when the PL/SQL procedure errors, how does one send a failure message back and display it on the self-service page?
    Writing information into a log or table at the database level works for troubleshooting, but we're looking for something that will give the end user an intelligible message that the workflow has failed.
    Thanks ahead of time for your responses.
    Rich

    We implemented the same kind of requirement long back.
    We defined our PL/SQL procedures with two OUT parameters:
    1) Result type ('S' for success, 'E' for error)
    2) Result message
    In the PL/SQL procedure we always use the construct below when we want to raise a message:
        hr_utility.set_message(APPL_NO, 'FND_MESSAGE_NAME');
        hr_utility.raise_error;
    In the exception block we write the following (in the success case we just set p_result_flag := 'S';):
        EXCEPTION
          WHEN app_exception.application_exception THEN
            p_result_flag    := 'E';
            p_result_message := hr_utility.get_message;
          WHEN OTHERS THEN
            p_result_flag := 'E';
            fnd_message.set_name('PER', 'FFU10_GENERAL_ORACLE_ERROR');
            fnd_message.set_token('2', substr(sqlerrm, 1, 200));
            fnd_msg_pub.add;
            p_result_message := fnd_msg_pub.get_detail;
        END;
    After executing the PL/SQL from Java, we wrote something similar to:
        orclStmt.execute();
        OAExceptionUtils.checkErrors(txn);
        String resultFlag = orclStmt.getString(resultFlagBindIndex);   // OUT bind position of the flag
        if ("E".equalsIgnoreCase(resultFlag)) {
            String resultMessage = orclStmt.getString(resultMessageBindIndex); // OUT bind position of the message
            orclStmt.close();
            throw new OAException(resultMessage, OAException.ERROR);
        }
    This safely shows the message to the user with all the data still on the page.
    We have been using this construct for a long time across all our projects, and it works as expected.
    Regards,
    Peddi.
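    For reference, outside of the OAF classes used above, the same two-OUT-parameter pattern can be sketched in plain JDBC. This is a hedged illustration only: the package/procedure name and bind positions are hypothetical, not from the original post.

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Types;

        public class CallResultDemo {
            public static void main(String[] args) throws Exception {
                // Connection details are placeholders; an Oracle JDBC driver is assumed.
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//host:1521/service", "user", "password");
                     CallableStatement stmt = conn.prepareCall(
                         "{call my_custom_pkg.my_procedure(?, ?)}")) {
                    stmt.registerOutParameter(1, Types.VARCHAR); // result flag: 'S' or 'E'
                    stmt.registerOutParameter(2, Types.VARCHAR); // result message
                    stmt.execute();
                    if ("E".equalsIgnoreCase(stmt.getString(1))) {
                        // In OAF this would be thrown as an OAException instead.
                        throw new RuntimeException(stmt.getString(2));
                    }
                }
            }
        }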

  • Best practices on how to implement logging in custom application

    In the Enterprise Manager it is possible to display/browse the content of the different log files generated by the application server modules.
    I have some custom web applications which currently use the log4j framework to write to log files. Is it possible to make these log files accessible to the Enterprise Manager?
    Or what is the best practice for implementing logging in custom applications so that it can be browsed in the Enterprise Manager? (I do not want to use System.out.)

    I thought that this could be done. An ex-colleague did it, but he didn't tell me how.
    Since it took him just 10 minutes to solve, I believe it's fairly easy.
    cu
    Andreas
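    Since the answer above never spells out the solution, here is a minimal, hedged sketch of the usual log4j 1.x approach: attach a FileAppender whose file lives in a directory the Enterprise Manager already browses. The path below is hypothetical; the right directory depends on your application server installation.

        import java.io.IOException;
        import org.apache.log4j.FileAppender;
        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;
        import org.apache.log4j.PatternLayout;

        public class EmLoggingSketch {
            public static void main(String[] args) throws IOException {
                Logger logger = Logger.getLogger(EmLoggingSketch.class);
                // Hypothetical path: point this at a log directory EM scans.
                FileAppender appender = new FileAppender(
                        new PatternLayout("%d %-5p [%c] %m%n"),
                        "/oracle/j2ee/home/log/custom-app.log",
                        true); // append rather than truncate
                logger.addAppender(appender);
                logger.setLevel(Level.INFO);
                logger.info("Custom application started");
            }
        }

    The same appender would normally be declared in log4j.properties rather than in code; the programmatic form is shown only to keep the example self-contained.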

  • How to allocate memory dynamically?

    Hello,
    I'm dealing with an array of fourteen 1024x8192 pixel images.
    Everything works correctly when the images are smaller,
    but when I use the 1024x8192 ones for the calculations
    I run out of memory after the 7th image.
    Is there a way for me to dynamically allocate memory,
    so that IMAQ, arrays, etc. do not keep allocating new space
    for the data every time?
    LabVIEW's memory use rises to about 1,300,000 K
    according to the Windows Task Manager, and then it crashes.
    I have 2 GB of memory on this machine.
    Best Regards,
    Ari

    No matter how much memory you have on the machine, you will never be able to use more than about 1.5 GB on a Windows XP machine. If you got to 1.3 GB, you did well, since the LabVIEW code itself must also fit into that 1.5 GB.
    The reason is that Windows is a 32-bit operating system and LabVIEW is a 32-bit executable, giving an absolute maximum of 4 GB of address space. Windows reserves the upper 2 GB of that space for system code, and it also reserves the top 512 MB of the lower 2 GB for system DLLs. That leaves everything else with 1.5 GB. Depending on your memory fragmentation and what other code is running, you will be able to use somewhat less than this.
    So you probably need to rewrite the code to load only what you are actually using.
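    For scale (assuming 4 bytes per pixel, which depends on the actual image type): a 1024 x 8192 image is 8,388,608 pixels, about 32 MB each, so fourteen of them come to roughly 450 MB before any processing. Since LabVIEW arrays need contiguous memory and each IMAQ operation can hold its own copy of the data, only a few intermediate copies are needed to exhaust a fragmented 1.5 GB address space, which is consistent with running out of memory around the 7th image.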
    For tips on dealing with large memory issues, check out Managing Large Data Sets in LabVIEW.
    Good luck.  Let us know if we can help more.

  • Best practice for how to access a set of wsdl and xsd files

    I've recently been poking around with the Oracle ESB, which requires a bunch of WSDL and XSD files from HOME/bpel/system/xmllib. What is the best practice for including these files in a BPEL project? It seems like a bad idea to copy all of these files into every project that uses the ESB, especially if there are quite a few consumers of the bus. Is there a way I can reference this directory from the project, so that the files can just stay in a common place for all the projects that use them?
    Bret

    Hi,
    I created a project (JDeveloper) with local XSD files, then tried to delete them and recreate them in the structure pane as references to a version on the application server. After reopening the project I deployed it successfully to the BPEL server. The process is working fine, but in the structure pane there is no information about any of the XSDs anymore, and on the payload in the variables there is an exception (problem building schema).
    How does BPEL know where to look for the XSD files, and how does the mapping still work?
    This cannot be the correct way to do it. Do I have a chance to rework an existing project, or do I have to rebuild it from scratch in order to get all the references right?
    Thanks for any clue.
    Bette

  • How to allocate memory and to get the start address

    I'm just trying to allocate 8 MB of memory and get the start address of that block; with the address, I will access that memory from a DLL function. So I want to know how to allocate the memory and get its address. If anyone knows, please tell me.

    duplicate post
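    Since the answer here is just a pointer elsewhere: if calling the DLL from Java is an option, a hedged sketch with the JNA library looks like the following (the 8 MB size is from the question; everything else is illustrative). In LabVIEW itself, the memory manager functions (e.g. DSNewPtr with MoveBlock) play the equivalent role.

        import com.sun.jna.Memory;
        import com.sun.jna.Pointer;

        public class NativeBlockDemo {
            public static void main(String[] args) {
                // Allocate an 8 MB block outside the Java heap.
                Memory block = new Memory(8L * 1024 * 1024);
                // Start address of the block, suitable for a DLL expecting a raw pointer.
                long startAddress = Pointer.nativeValue(block);
                System.out.printf("8 MB block starts at 0x%x%n", startAddress);
                // Keep 'block' referenced while the DLL uses the memory;
                // it is released when the Memory object is garbage collected.
            }
        }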

  • Best practice on how to handle employees who do not have a last name?

    We are a Canadian-based company with some international employees. We have recently begun to enter the international employees into the HR module. This has led to some problems for employees from India who do not have both a first name and a last name, as many of our downstream systems require both names.
    I'm wondering what other companies with international employees have done in this circumstance. Can someone recommend a best practice? We want to ensure that whatever we do is not offensive to anyone.
    Thanks.

    Dear,
    Indian names vary from region to region. Names are also influenced by religion and caste, and different languages are spoken in different regions of India. This variety makes for confusing differences in names and their styles.
    Now to the point: since you are an international company, when entering the names of your international employees I would suggest using the names as written in their passports (if they hold valid passports). If no passport is available, use their bank information or other official records, so that they don't face further problems with visas, banking transactions, etc.
    1. Maddepalli Venkata Ramana Rao
    In this case Maddepalli is the surname, Venkata Ramana can be the first name, and Rao can be entered as the second/last name.
    2. Hardev Singh
    In this case there is no separate surname; Singh will be treated as the surname or ethnic designation. You can enter Hardev as the first name and Singh as the last name.
    Make some entry fields optional depending on the situation, and take the help of any employees of Indian origin in your office.
    Regards,
    Syed Hussain.

  • ES2 best practice for how much stuff should be in one application?

    I'm wondering if there is a best practice or recommended maximum for the number of forms/processes/etc. contained within one application in ES2. I have an application which has about 5 processes and over 300 XDP forms. Deploying the application takes five minutes or longer. It seems to be working fine, but I'm curious whether this will cause any problems and whether there is a recommended threshold.

    I don't think there is a limit on the number of processes and forms to be used within an application.
    However, there is a recommendation not to have more than 20 variables in a single process.
    Each process created within your application becomes a service, so it doesn't matter whether you have 500 processes in one application or 50 processes in each of 10 applications: you will end up with 500 services deployed into the Java runtime.
    The form count doesn't matter either, since forms just stay within the repository (not in the Java runtime).
    The only issue with an enormous number of resources within an application is the response time of deploying to the application server (which you already mentioned here).
    So, if you can split your resources into manageable units, that will reduce your check-in/deploy time.
    Nith

  • Best practices on how to document code?

    Hi,
    I tried searching the web for tutorials or examples, but couldn't come up with anything. Can anyone sum up some of their best practices for documenting LabVIEW code? I'm talking about a fairly elaborate program, built with a state machine approach, with several subVIs. Since it is important that other people can understand my code, documentation is fairly important, but NI hasn't got a tutorial for it yet. Maybe a suggestion?
    Thank you for your time! This forum has been a valuable companion already!
    Giovanni
    PS: I'm using LabVIEW 8.5 btw

    Ben wrote:
    F. Schubert wrote:
    Document the state machine. You can use bubble-and-arrow (in any drawing program) or UML (for beginners it is easy to start with Dia). Create a PNG from the diagram and paste it onto the block diagram.
    Some use a stacked sequence with the picture in frame 0 and the code in frame 1.
    For the wires of the state machine, it's good if you label them on both shift registers (outside the while loop).
    Felix
    That brings up a good point: both sides or just one?
    For small diagrams that easily fit on one screen, putting the labels on both sides can sometimes increase the diagram size by 30%.
    When the diagrams are small, I usually label only one side. The other plus of putting the labels on one side only is that I have just one set of labels to align with the shift register, while two sides would require twice as much shuffling.
    So I often bend that rule.
    Ben
    Of course, if you MUST use old LabVIEW versions without wire labels, the alignment (and Block Diagram Cleanup) can get to be a headache. I sell "maintainability" to my customers and explain the life cycle of the systems they are purchasing. I can usually sell the latest LabVIEW version with this argument.
    But I do have customers stuck in 6.1, so the point is valid. So (IMHO) shift registers need a label on only one side, since they run straight: either on the terminal (preferred) or outside the loop. Linked tunnels need labels on both sides, and tunnels that are not linked need just one label.
    And Felix brought up some great points. Any project should have a "tree VI" with the major VIs/modules on the block diagram (hint: show labels). This makes navigation very easy when you have a few hundred subVIs. And of course, this practice REQUIRES meaningful icons and VI names, or you are just looking at clutter.
    Jeff

  • Best Practices for Optimizing CPU and Memory

    It would be really helpful if someone would summarize the best way to set things up between concert level, set level, and patch level, along with using aliases, to make sense of the most efficient ways to set up a new concert in MainStage. I have spent quite a bit of time experimenting, and have a pretty good understanding, but a summarized document with "rules" would be very helpful.
    As an example, I have two guitars that plug into separate inputs on my hardware interface, but I want to use the same channel strip for both (with plug-ins for effects and lead sounds). So I have some patches that select Input 3 for one guitar and Input 4 for the other. Am I duplicating all of my plug-ins and effects by doing this, or is there a more efficient method?
    I have come across similar "puzzles" when using MainStage, and can always make things work, but am never quite sure what is most efficient. Has anyone seen such a document or best practices guide? Or can you just share your learnings here?

    As far as I know, the only load on the CPU is the active patch; all the others are 'bypassed', according to the manual, and my own experience backs up this claim. When I experiment with sounds - several synths with complex FX plug-ins - I usually put a large CPU meter on my layout to monitor the load. The load seems fairly similar whatever channel strips and synths I have. (NB: normally I have only one synth in each set. If I have several synths in one set, that does change the load.)
    FX, including Amp Designer, don't put as much load on the processor as synths, as far as I can see.
    Amp settings are a bit of a nuisance, as you can't call up Amp Designer settings from a controller yet. That means if you need more than one amp setting, you will need more than one instance of Amp Designer. In that case, you might as well load it in each set's channel strip; you shouldn't notice any gain in CPU load.
    Pedal Board: if you have a consistent set of pedals that you move between all the time, put the board in an Aux channel. That way you only have to set up the controls once. My general-purpose guitar pedal board layout is Wah, Overdrive, Delay, Chorus & Flange. All have Bypass buttons allocated, and some have extra controls set, e.g. Chorus & Flange parameters and Delay times. The Overdrive usually has a fairly comprehensive set of controls depending on the pedal I choose.
    This allows me to change the sounds quite a lot with a minimum of controls.
    Do you put your sound into a guitar amp or a PA system? I put my rig into a PA from the computer. I have found that I get the best, most 'natural' electric sound by not using Amp Designer, but by using Channel EQ instead. I have an EQ patch that sounds very close to a Peterson clean channel (Petersons are English-made transistor amps that sound rather like a Mesa Boogie 20-watt valve amp). When I add FX to that, I can get a very wide range of amp sounds very easily. It might be worthwhile exploring EQ settings.

  • Best Practices for zVM/SLES10/zDB2 environment for dialog instances.

    Hi, I am a zSeries system programmer who has just completed an IBM-led proof of concept which demonstrated the viability of running SAP instances on SUSE SLES10 Linux booted in zVM guests and accessing zDB2 data via HiperSockets. Before we build a Linux infrastructure using the 62 IFLs we just procured, we are wondering if any best practices for this environment have been developed by SAP as an OSS note or something else. Below you will find an email which was sent to and responded to by IBM and Novell on these topics...
    "As you may know, Home Depot has embarked on an IBM led proof of concept using SUSE SLES10 running in zVM guests on IBM zSeries hardware to host SAP server instances.  The Home Depot IT organization is currently in the midst of a large scale push to modernize our merchandising and people systems on SAP platforms.  The zVM/SUSE/SAP POC is part of that effort, as is a parallel POC of an Intel Blade/Red Hat/SAP platform.  For our production financial systems we now use a pSeries/AIX/SAP platform.
          So far in the zVM/SUSE/SAP POC, we have been able to create four zVM LPARS on IBM z9 hardware, create twelve zVM guests on those LPARS, boot SLES10 in those guests, install and run SAP instances in those guests using hipersockets for access to our DB2 SAP databases running on zOS, and direct user workloads to the SAP instances with good results.  We have also successfully developed cloning scripts that have made it possible to create new SLES10 instances, configured and ready for SAP installs, in about 10 seconds using FLASHCOPY and IBM DASD.
          I am writing in the hope that you can direct us to technical resources at IBM/Novell/SAP who may be able to field a few questions that have arisen.  In our discussions about optimization of the zVM/SUSE/SAP platform, we wondered if any wisdom about the appropriateness of and support for using zVM capabilities to virtualize SAP has ever been developed or any best practices drafted.  Attached you will find an IBM Redbook and a PowerPoint presentation which describes the use of the zVM discontiguous shared segments and the zVM named saved system features for the sharing of reentrant code and other  elements of Linux and its applications, thereby conserving storage and disk resources allocated to guest machines.   The specific question of the hour is, can any SAP code be handled similarly?  Have specific SAP elements eligible for this treatment been identified? 
          I've searched the SUSE Knowledgebase for articles on this topic to no avail.  Any similar techniques that might help us reduce the total cost of ownership of a zVM/SUSE/SAP platform as we compare it to Intel Blade/Red Hat/SAP and pSeries/AIX/SAP platforms are of great interest as we approach the end of our POC.  Can you help?
          Greg McKelvey is a Client I/T Architect at IBM.  He found the attached IBM documents and could give a fuller account of our POC.  Pat Downs, IBM zSeries IT Architect, has also worked to guide our POC. Akshay Rao, IBM Systems IT Specialist - Linux | Virtualization | SOA, is acting as project manager for the POC.  Jim Hawkins is the Home Depot Architect directing the POC.  I've CC:ed their email addresses.  I am sure they would be pleased to hear from you if there are questions about what I am asking here.  And while writing, I thought of yet another question that I'm hoping somebody at SAP might weigh in on: are there any performance or operational benefits to using Linux LVM to apportion disk to filesystems, vs. using zVM to create appropriately sized minidisks for filesystems without LVM getting involved?"
    As you can see, implementation questions need to be resolved.  We have heard from Novell that the SLES10 Kernel and other SUSE artifacts can reside in memory and be shared by multiple operating system images.  Does SAP support this configuration?  Also, has SAP identified SAP components which are eligible for similar treatment?  We would like to make sure that any decisions we make about the SAP platforms we are building will be supportable.  Any help you can provide will be greatly appreciated.  I will supply the documents referenced above if they are not known to any answerer.  Thanks,  Al Brasher 770-433-8211 x11895 [email protected]

    Hello Al,
    First, let me welcome you on board. I am sure you won't be disappointed with your choice to run SAP on z/OS.
    As for your questions: they weren't easy to find in this long post, so I suggest you take the time to write a short summary containing a very short list of questions.
    As for answers, here are a few useful sources of information:
    1. The SAP on DB2 for z/OS SDN page:
    SAP on DB2 for z/OS
    In it you can find 2 relevant docs:
    a. Best practices for ...
    b. Database administration for DB2 UDB for z/OS.
    This second publication is excellent; apart from DB2-specific info, it contains information on all the components of SAP on DB2 for z/OS, like zLinux, z/VM and so on.
    2. I can see that you are already familiar with the IBM Redbooks, but it seems that you haven't taken the time to get the most out of that resource. From your post it is clear that you have found one useful publication, but I know there are several.
    3. A few months ago I wrote a short post on a similar subject. I'm sure it's not exactly what you are looking for at this moment, but it's a good start, and with some patience you may be able to get some answers. Here's the link:
    http://blogs.ittoolbox.com/sap/db2/archives/index-of-free-documentation-on-sap-db2-administration-14245
    Good luck,
    Omer Brandis

  • Best practice for upgrading task definition without deleting task instances

    Best practice for upgrading a task definition in a production system without deleting or terminating task instances.
    If I try to update a task definition while task instances are running, I get the following error:
    Task definition 'My Task - Add User' may not be modified while there are active task instances
    Is there a best practice to handle this? I tried to force an update through the console, but that didn't work. I tried editing the task from the debug page and got the same error.

    1) Rename the original task definition.
    2) Upload the new task definition with the original name.
    3) Later, after all the running tasks have timed out, delete the old definition.
    E.g., if your task definition is "myWorkflow":
    1) Rename "myWorkflow" to "myWorkflow-old-2009-07-28"
    2) Upload the new task definition as "myWorkflow".
    Existing tasks will stay linked to the original (renamed) workflow definition.
    New tasks will use the new definition.
    As the previous poster notes, depending on the changes you are making, letting the old task definitions stay active could have bad side-effects and might be better avoided.

  • How to allocate memory

    How to force Java to allocate enough memory for a 20 Million character String.

    Actually it would be:
    StringBuffer b = new StringBuffer(20000000);
    probably. But if you overflow by a few extra bytes, I believe the algorithm doubles the capacity, which might be a bit much just for a couple of bytes.
    "Allocating a 2200000000 element array is a bit more difficult... any ideas?"
    2 billion? Yes, one idea: don't do it. It isn't possible, on any OS.
    Windows and earlier Sun machines have an addressable limit of 4 gig (billion). With 2 billion elements, each element could only occupy 2 bytes of space, and there wouldn't be much room left for the program to do anything else.
    And that is addressable space. On Windows, an application can't use more than 2 gig; the OS reserves the other 2 gig. A Sun box still has to fit the OS components the program needs into the addressable space.
    I am guessing that you could do this in C/C++ using the 64-bit address space of Solaris 8.
    However, every Sun JVM, at least before 1.4, limits the max heap size to less than 4 gig, and a Java object takes far more than 2 bytes of space.
    It might just be possible on a Solaris box, but only if the elements are plain characters and nothing else. Java stores strings in UTF-16, two bytes per character, so 2 billion characters are about 4 gig of data on their own, and I believe one can tweak the JVM heap upward. But the box will have to have more than 4 gig of memory, at least if you expect the program to do processing without thrashing the hard drive.
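    A minimal runnable version of the StringBuffer suggestion above (StringBuilder is the modern unsynchronized equivalent; the -Xmx value is illustrative):

        // 20 million chars is ~40 MB of UTF-16 data, plus a copy for toString(),
        // so run with some headroom, e.g.: java -Xmx256m BigStringDemo
        public class BigStringDemo {
            public static void main(String[] args) {
                StringBuilder b = new StringBuilder(20_000_000); // pre-size to avoid doubling
                for (int i = 0; i < 20_000_000; i++) {
                    b.append('x');
                }
                String s = b.toString();
                System.out.println("Length: " + s.length());
            }
        }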

  • How to allocate memory in Weblogic 8.0.1.5

    Hi,
    Can anyone help me out with this one? The WebLogic server is slow and always hangs. Here are the details of the setup:
    OS: Windows 2000 Server
    WebLogic: BEA 8.0.1.5
    The physical memory of the server is 8 GB, but the WebLogic server is only using 1 GB.
    I already tried changing the Java heap to 2 GB during startup, and yet it still uses 1 GB.
    Here is what I do:
    $ java -XX:NewSize=512m -XX:MaxNewSize=2048m -XX:SurvivorRatio=8 -Xms512m -Xmx2048m
    Please let me know how I can allocate more memory for WebLogic.
    Thanks.

    Hi,
    The link below might help you:
    http://renjan-thomas.blogspot.com/2009/08/java-virtual-machine-heap-size.html
    Note also that a 32-bit JVM on 32-bit Windows typically cannot grow its heap much beyond roughly 1.5 GB of contiguous address space, regardless of how much physical RAM the server has, so -Xmx2048m is unlikely to take effect there. A 64-bit OS and JVM, or several smaller WebLogic instances, is the usual way to make use of the full 8 GB.

  • Best practice with Listeners to avoid memory leaks

    Hi guys,
    I'm using JavaFX to develop an application, creating views with FXML. I'm experiencing some memory problems, perhaps because I do not understand the life cycle of the controller and the views.
    For example, must the listeners that I add in the controller to the UI components be removed before navigating away from the view?
    And what about an anonymous inner listener? For example:
    tabellaClienti.getSelectionModel().selectedItemProperty().addListener(new ChangeListener<Cliente>() {
        @Override
        public void changed(ObservableValue<? extends Cliente> property, Cliente oldSelection, Cliente newSelection) {
            seleziona(newSelection);
        }
    });
    In this case, when is the listener destroyed?
    I'm finding that I get the same view (with all its components) in memory many times.
    Is there a guideline to follow to be sure to avoid making mistakes?
    Thanks!

    You should either remove your listener or, if you don't know when to do so (because you lose track of your tabellaClienti), use a WeakChangeListener:
    http://docs.oracle.com/javafx/2/api/javafx/beans/value/WeakChangeListener.html
    But I wonder why there are leaks at all: your listener is referenced only by the view and not by anything else, so it should be garbage collected along with the view.
    Note that with a WeakChangeListener you must then keep a private class variable in your view holding a strong reference to your ChangeListener.
    Do you know a good tool for seeing which objects are in memory and cannot be garbage collected? We constantly have memory issues, too...
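    A hedged sketch of that advice, using Java 8 lambda syntax (Cliente and seleziona are names from the original post; everything else is illustrative):

        import javafx.beans.value.ChangeListener;
        import javafx.beans.value.WeakChangeListener;
        import javafx.scene.control.TableView;

        public class ClientiController {
            private TableView<Cliente> tabellaClienti; // injected from FXML in the real app

            // Strong reference in the controller: with only the weak wrapper,
            // the listener itself would be collected and silently stop firing.
            private final ChangeListener<Cliente> selectionListener =
                    (property, oldSelection, newSelection) -> seleziona(newSelection);

            public void initialize() {
                tabellaClienti.getSelectionModel().selectedItemProperty()
                        .addListener(new WeakChangeListener<>(selectionListener));
            }

            private void seleziona(Cliente cliente) { /* update the detail view */ }
        }

        class Cliente { } // stand-in for the domain class from the original post

    Once the view and its controller become unreachable, the weak wrapper lets the listener be collected with them, instead of the selection model keeping the whole view alive.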
