ColdFusion Multi-Instance Memory Issues

Hello, we recently got brand new servers with 8 GB of RAM and 64-bit Windows 2008. We have about 7 instances created on these servers and I am noticing something extremely disturbing. On 2 of the instances which I just created today, which have absolutely no sites running on them yet (we are still migrating sites), ColdFusion immediately consumes 700 to 900 MB of working set memory. This is the case for all instances, which then makes my server seem like it is out of memory. On the old box each instance only took the amount of working memory that was needed, and this would grow over time, but not immediately upon starting the server. I started one of these instances and literally watched as it took 850 MB of RAM within 2 minutes of starting, and it doesn't get released.
I do have the JVM set to 2 GB, and for now all the instances share the same jvm.config. I am just curious if anyone else running 64-bit Windows 2008 is having the same issue, and if it is just normal behavior with 64-bit systems. We moved to these beefy servers to help with memory issues, but it seems I am still plagued with the issue even when there is no site allocated to the instance.
Any ideas and thoughts would be appreciated.
Thank you.

If you have a minimum heap parameter in your jvm.config (-Xms, the JVM's minimum heap size), then each instance on start will immediately reserve that amount of memory.
This is a Java thing, not CF. With CF8, I don't think having the min memory value matters with the most recent Java versions. In previous versions you would occasionally get out-of-memory errors if you didn't set it, but I haven't heard of this since CF8 came out.
They may have also fixed something in 8 to help alleviate that issue.
So chalk it up as normal. With 7 instances you'll probably run into more CPU issues.
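For reference, the lines in question in jvm.config look something like this (values purely illustrative, not a recommendation):

```
# java.args in jvm.config: -Xms is the minimum/initial heap the JVM reserves
# immediately at startup; -Xmx is the maximum it may grow to.
java.args=-Xms512m -Xmx2048m
```

With -Xms512m, every instance sharing this file grabs roughly half a gigabyte the moment it starts, which would explain working-set numbers like those described above.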
Byron Mann
[email protected]
[email protected]
Software Architect
hosting.com | hostmysite.com
http://www.hostmysite.com/?utm_source=bb

Similar Messages

  • QUESTION: Multi-instance memory requirements on Linux

    Hi, all.
I've been out of the loop on Oracle technical details for a while, and need to re-educate myself on a few things. I'm hoping someone can point me to a book in the online docs which discusses my question.
    Oracle db 10.2.0.2, on Redhat Linux 2.6.9-67.0.0.0.1. This server is a virtual machine, on a VMWare ESX server.
    My question concerns the utilization of memory resources in a multi-instance environment.
I currently have 2 instances/dbs on this server. Each was configured with an SGA_TARGET of approximately 900MB. java_pool_size, large_pool_size and shared_pool_size are also assigned values in the pfile, which I believe supersede SGA_TARGET.
    I am tasked with determining if the server can handle a third instance. It's unclear how much load the database will see, so I don't yet know how much memory I will want to allocate to the shared pool for the buffer cache, etc.
    I wanted to see how much memory was being used by the existing instances, so on the server I attempted to capture memory usage information both before, and after, the startup of the second instance.
I used 'top' for this, and found that the server has a total of 3.12GB of physical memory. Currently there's about 100MB of free physical memory.
The information from 'top' also indicated that physical memory utilization had actually decreased after I started the second instance:
    Before second instance was started:
    Mem: 3115208k total, 3012172k used, 103036k free, 46664k buffers
    Swap: 2031608k total, 77328k used, 1954280k free, 2391148k cached
    After second instance was started:
    Mem: 3115208k total, 2989244k used, 125964k free, 47144k buffers
    Swap: 2031608k total, 89696k used, 1941912k free, 2320184k cached
Logging into the instance, I ran a 'show SGA' and got an SGA size of about 900MB (as expected). But before I started the instance, there wasn't anywhere near that amount of physical memory available.
The question I need to answer is whether this server can accommodate a third instance. I gather that the actual amount of memory listed in SGA_TARGET won't be allocated until needed, and I also understand that virtual memory will be used if needed.
    So rather than just asking for 'the answer', I'm hoping someone can point me to a resource which will help me better understand *NIX memory usage behavior in a multi-instance environment...
    Thanks!!
    DW
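To make the before/after comparison repeatable, the same figures 'top' shows can be read straight from the kernel (Linux, as on this server):

```shell
# Snapshot the memory fields 'top' summarizes; run once before and once
# after starting the instance, then diff the two snapshots.
grep -E '^(MemTotal|MemFree|Buffers|Cached|SwapTotal|SwapFree):' /proc/meminfo
```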

Each was configured with an SGA_TARGET of approximately 900MB. java_pool_size, large_pool_size and shared_pool_size are also assigned values in the pfile, which I believe supersede SGA_TARGET.
    Not quite. If you set non-zero values for those parameters as well as setting SGA_TARGET, then they act as minimum values that have to be maintained before extra free memory is distributed automatically amongst all auto-tuned memory pools. If you've set them as well as SGA_TARGET, you've possibly got a mish-mash of memory settings that aren't doing what you expected. If it was me, I'd stick either to the old settings, or to the new, and try not to mix them (unless your application is very strange and causes the auto-allocate mechanism to apportion memory in ways you know are wrong, in which case setting a floor below which memory allocations cannot go might be useful).
    3GB of physical memory is not much these days. The general rule is that your total SGAs should not make up more than about 50% of physical memory, because you probably need most of the other 50% for PGA use. But if your users aren't going to be doing strenuous sorting (for example), then you can shrink the PGA requirement and nudge the SGA allowance up in its place.
    At 900MB per SGA, you can have two SGAs and not much user activity. That's 1800MB SGA plus, say, 200MB each PGA = 2200MB, leaving about 800MB for contingencies and Linux itself. That's quite tight and I personally wouldn't try to squeeze another instance of the same size into that, not if you want performance to be meaningful.
Your top figures suggest you're already paging physical memory out to swap, which can't be good!
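As a concrete illustration of the floor behaviour described above, a pfile that sticks cleanly to the auto-tuned style might look like this (values are illustrative only, not a sizing recommendation):

```
# One target; Oracle distributes memory amongst the auto-tuned pools.
sga_target=900M
# Optional floors: with sga_target set, a non-zero value here is a minimum
# that auto-tuning will not shrink below, not a fixed allocation.
shared_pool_size=150M
java_pool_size=0
large_pool_size=0
```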

  • SSAS 2012 multi-instance installation issue

    Hello,
    Just wanted to share an issue and the resolution. Any feedback will be appreciated.
    Symptoms
    After installing the second named instance of SSAS 2012 on the same machine the first instance stops working.
    Cause
    The problem is that the second SSAS instance installed on the same machine doesn’t create its own INI file, but overwrites the original one, created by the first instance. The second instance’s
    service is configured to use the overwritten INI. Note that the overwritten INI is still located in its original folder, ex. C:\Program Files\Microsoft SQL Server\MSAS11.SSAS01\OLAP\Config\msmdsrv.ini.
    As a result, the second instance’s service works well, but the first one will fail to start once it’s shut down for the first time.
    Resolution
    For the sake of simplicity we will use names SSAS01 and SSAS02 for the first and the second named SSAS instances installed on the same machine called SERVER01. The default instance root directory
    will be C:\Program Files\Microsoft SQL Server.
    1. Install both named instances normally, using the same instance root directory, but separate folders for Data, Log, etc.
    2. Browse to the first instance “Config” folder C:\Program Files\Microsoft SQL Server\MSAS11.SSAS01\OLAP\Config. Make sure that the file msmdsrv.ini exists and folder locations within it
    point to SSAS02.
    3. Browse to the second instance folder C:\Program Files\Microsoft SQL Server\MSAS11.SSAS02\OLAP. Make sure that the “Config” subfolder does not exist.
    4. Create “Config” subfolder under C:\Program Files\Microsoft SQL Server\MSAS11.SSAS02\OLAP
    5. Change folder permissions:
    In C:\Program Files\Microsoft SQL Server\MSAS11.SSAS01\OLAP\Config
    Remove SQLServerMSASUser$SERVER01$SSAS02 local group
    Add SQLServerMSASUser$SERVER01$SSAS01 local group with Full permissions
    In C:\Program Files\Microsoft SQL Server\MSAS11.SSAS02\OLAP\Config
    Add SQLServerMSASUser$SERVER01$SSAS02 local group with Full permissions
    6. Copy INI file:
    C:\Program Files\Microsoft SQL Server\MSAS11.SSAS01\OLAP\Config\msmdsrv.ini
    To:
    C:\Program Files\Microsoft SQL Server\MSAS11.SSAS02\OLAP\Config\
    7. Edit the original INI file: C:\Program Files\Microsoft SQL Server\MSAS11.SSAS01\OLAP\Config\msmdsrv.ini.
    Replace all “SSAS02” appearances with “SSAS01”.
    8. Edit Registry:
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSOLAP$SSAS02]
    Change the ImagePath value to:
    "C:\Program Files\Microsoft SQL Server\MSAS11.SSAS02\OLAP\bin\msmdsrv.exe" -s "C:\Program Files\Microsoft SQL Server\MSAS11.SSAS02\OLAP\Config"
    9. Start the services for both instances.
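For step 8, the same change can be expressed as a .reg file (the key and paths follow the sanitized SSAS01/SSAS02 naming used in this walkthrough; note that ImagePath is normally of type REG_EXPAND_SZ, so verify the value type after importing, or make the change directly in regedit):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSOLAP$SSAS02]
"ImagePath"="\"C:\\Program Files\\Microsoft SQL Server\\MSAS11.SSAS02\\OLAP\\bin\\msmdsrv.exe\" -s \"C:\\Program Files\\Microsoft SQL Server\\MSAS11.SSAS02\\OLAP\\Config\""
```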

    Hi Grab,
    Thank you for sharing such useful information. It will be beneficial to other forum members who encounter a similar issue.
    Since this is not a question thread, I have changed it from the Question type to the Discussion type.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Coldfusion memory issue

    Hi,
    One of our Public web site is developed in ColdFusion. This runs on a server having OS Windows Server 2003 Standard Edition with 4 CPUS and 4 GB RAM.
    The issue is that, after every couple of hours, the ColdFusion JRun process's memory utilization reaches 1 GB on this server, the response of the website becomes very slow, and the site goes down. As a temporary fix, we are restarting the ColdFusion services to resolve this issue.
    Server product  : ColdFusion
    Version  : 8,0,1,195765 
    Operating System : Windows server 2003 R2 Standard Edition Service pack 2
    Java      : 1.6.0_13
    We are now planning to go for an OS upgrade from Windows 2003 Standard edition to Windows 2003 Enterprise edition.
    Also planning to increase number of CPUs from 4 to 8 and to increase RAM from 4 GB to 8 GB.
    Can someone please advise whether, after the upgrade and the increases in RAM and CPUs, the ColdFusion JRun process will be able to use the additional memory?
    Thanks!
    Siva

    It's possible some JVM tuning may be in order.  This can be a very complex process.  If this is something you aren't comfortable doing yourself, there are a few consultants who could assist for a fee (Charlie Arehart and Mike Brunt are two that come to mind).
    Upgrading the Windows version and adding memory will only help up to a point if you are running a 32-bit version of your operating system, as the 32-bit version of Java can typically only utilize up to about 1.8 GB of memory.  If you really want to give ColdFusion more memory, you should consider using a 64-bit operating system and the 64-bit version of Java.  I'm not sure if you can run ColdFusion on 64-bit though, so make sure that your edition of ColdFusion is supported on 64-bit operating systems (I know ColdFusion 9 Standard and Enterprise are, but I'm not sure about ColdFusion 8).
    -Carl V.

  • How to create per instance jvm.config files for multi-instance ColdFusion Cluster ?

    When we created our ColdFusion 9 instance on Solaris, all the files and settings of the master instance were copied except the jvm.config file. This means that any changes made there are used for all instances of ColdFusion on that node. Now I want to play with memory settings to fine-tune application performance, and I want to do it on one single instance. I want to know the process of creating individual jvm.config files for each instance.
    Thanks
    Pradeep
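The usual JRun-era approach is to clone the master jvm.config into a per-instance copy, change only that copy, and point the instance's startup at it with JRun's -config switch (verify the switch against your JRun version's documentation). A minimal sketch, with illustrative paths and instance names:

```shell
# Sketch only: clone the master jvm.config and change the heap for one instance.
# On the poster's box, JRun appears to live under /data/www/appserver/jrun.
bin=$(mktemp -d)                                  # stand-in for JRun's bin directory
printf 'java.args=-Xmx2048m -Xms512m\n' > "$bin/jvm.config"
cp "$bin/jvm.config" "$bin/jvm_instance1.config"
sed -i 's/-Xmx2048m/-Xmx1024m/' "$bin/jvm_instance1.config"
cat "$bin/jvm_instance1.config"
# Then start that instance against its own file:
#   ./jrun -config jvm_instance1.config -start instance1
```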

    Thanks Anit, I did whatever is posted in the KB article. Now I get the error below:
    jrunx.xml.XMLMetaData$CouldNotCreateDocumentException: Could not create document from location 'file:/data/www/appserver/jrun/lib/servers.xml'
            at jrunx.xml.XMLMetaData.createDocument(XMLMetaData.java:1028)
            at jrunx.xml.XMLMetaData.importXML(XMLMetaData.java:200)
            at jrunx.xml.XMLMetaData.<init>(XMLMetaData.java:122)
            at jrunx.server.metadata.ServersMetaData.<init>(ServersMetaData.java:32)
            at jrunx.server.ServerManagement.refreshServersMetaData(ServerManagement.java:82)
            at jrunx.server.ServerManagement.getServerRootDirectory(ServerManagement.java:154)
            at jrunx.server.ServerManagement.getServerProperties(ServerManagement.java:191)
            at jrunx.server.ServerManagement.getSystemProperties(ServerManagement.java:204)
            at jrunx.kernel.JRun.setSystemProperties(JRun.java:688)
            at jrunx.kernel.JRun.start(JRun.java:337)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at jrunx.kernel.JRun.invoke(JRun.java:180)
            at jrunx.kernel.JRun.main(JRun.java:168)

  • CF10 Update 14 and possible memory issues

    One of my associates is complaining that since I applied the Coldfusion 10 Update 14 we are experiencing memory issues.  Has anyone else had issues with Update 14?
    Just a System Admin fighting the good fight!

    I’ll throw in that even if changing the JVM helped, it would still leave the question open as to whether/why Chad experienced a change in heap usage on solely updating CF.
    Chad, really? Nothing else changed? I just find that so odd. I've not yet heard about it (or seen it) being an issue. Of course, update 14 did do quite a bit: beyond bug fixes it also updated Tomcat. I would be surprised if that could lead to memory leaks (assuming that's what this is, if really NOTHING else changed).
    What about the database you’re using?  Update 14 did change the JDBC drivers for Postgres. Are you using that DBMS?
    Just trying to think what else could contribute to this, if indeed nothing else changed for you.
    It is possible that something else changed, either in the config or coding (and you didn’t know it), or perhaps in the load against the server (I see that all the time: someone adds a new site, perhaps brought over from another server, and they assume “it doesn’t get much traffic”, but they don’t realize how heavily spiders and bots may hit that newly added  site, which could definitely put pressure on the heap whether from increased sessions, caching, and so on.)
    Of course, you can always uninstall the update easily, in the same CF Admin page where you install it. That would help you prove if that alone was it. (Just be sure to rebuild the connectors back to the version as per the CF update you would revert to. I don’t think it’s appropriate to run the update 14 connector with an earlier update.)
    Finally, FWIW, if you really wanted to go nuts, you could change CF to use Java 8. That's another thing added in update 14: support for Java 8. But to be clear, the update does not change it for you, so that's not what happened here. But just as the two Carls proposed changing the JVM to see if it would help, you could consider moving to Java 8. That's all the more worth considering if indeed the issue is that something changed in your environment (config/code/load) and you simply do need more heap (in Java 7).
    Of course CF will use the same GC you have specified even if you update it to use Java 8, so you may need to make some changes to see a real impact. For instance, one thing Java 8 does by default is no longer use the permanent generation. That should have no effect on your observed use of heap. I'm just saying that 8 is indeed different, and you never know if updating to it could help (or hurt: the combination is so new, supported only in CF10 and still to come in update 3 of CF11, that there's relatively little known experience with it).
    Anyway, do let us know if you find more.
    /charlie

  • Memory issues - Bring demand paging to X6.

    Does anyone at Nokia remember a firmware update from a few years ago that added demand paging to the N95 and enabled the phone to multi-task properly without constant memory issues? Please bring it to the X6. It's frustrating to run only one app (in this case, Opera Mobile) and have memory warnings always popping up, even after a fresh start of the phone.
    In the wise words of a famous Top Gear presenter: how hard can it be?

    The numbers below were constantly moving as I copied them, so they're not exact, but here's what Task Manager reported. The funny thing is that it always seems to show around 3 GB available.
    Physical:
    Total - 12581952 Kb
    Available - 3377056 Kb
    System Cache - 3237124 Kb
    Kernel Memory:
    Total - 204288 Kb
    Paged - 143704 Kb
    Non-Paged - 60056 Kb
    I just saw that OEM was advising "Increase the size of the SGA by setting the parameter "sga_target" to 10752 M"

  • Heap Memory Issue in weblogic 9.2 for a JSF 1.1 web application

    Hi,
    We are running a JSF application (MyFaces, Facelets, Tomahawk, RichFaces & iBATIS) on a WebLogic 9.2 server on Solaris 10. This application is deployed in production and works fine under normal circumstances. But when there is heavy user load we are facing a memory issue. Memory usage gradually increases, and when it reaches the max, full GC kicks in again and again, which chokes up all requests. We don't save anything in session scope. All our backing beans are saved in request scope, so they should be garbage collected after each request is done, but this is not happening.
    We took a heap dump from production after this issue and analyzed it. I found that the objects set in the request object are not being garbage collected, and the root referrer of all these objects is weblogic.servlet.internal.MuxableSocketHTTP.
    I reproduced similar behaviour in one of our development environments using JMeter. I ran 100 concurrent users for almost 1 hour and saw the same behaviour. Below are the WebLogic objects still hanging in the heap after the test was over (I also triggered a manual garbage collection from the admin server).
    1) weblogic.servlet.internal.MuxableSocketHTTP - 1774 objects - retained heap (1 GB)
    2) weblogic.servlet.internal.ServletRequestImpl - 1774 objects - retained heap (1 GB)
    My understanding is that every request made to the WebLogic server goes through a MuxableSocketHTTP object, which creates the ServletRequestImpl that serves it. Once the request is served, these objects are supposed to be removed; because they are not, whatever is saved in the request is still hanging around.
    I am not able to understand why these objects remain after the request is done. Could anybody answer my question? I appreciate your help in advance.
    The GC setting for weblogic server while startup is:
    -XX:MaxTenuringThreshold=15 -XX:+PrintTenuringDistribution -XX:+AggressiveHeap -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:PermSize=128m -XX:MaxPermSize=128m -Xms3g -Xmx3g -XX:NewSize=512m -XX:MaxNewSize=1024m
    Thanks MaKK

    What happened with this issue? We are seeing something similar on WebLogic 9.2 MP1 on Solaris (JDK 1.5 patch 10, 32-bit): OutOfMemory errors with thousands of instances of weblogic.socket.MuxableSocket hanging around.
    Our thinking was initially the Java heap; then we thought that maybe the sockets weren't being closed properly, possibly in WebLogic or in LiveCycle.
    Any info would be greatly appreciated.
    Snippet of our stack trace:
    <16-Feb-2010 04:30:13 o'clock GMT> <Error> <Kernel> <BEA-000802> <ExecuteRequest failed
    java.lang.OutOfMemoryError: Java heap space.
    java.lang.OutOfMemoryError: Java heap space
    >
    javax.ejb.EJBException: EJB encountered System Exception: : java.lang.OutOfMemoryError: Java heap space
         at weblogic.ejb.container.internal.EJBRuntimeUtils.throwEJBException(EJBRuntimeUtils.java:145)
         at weblogic.ejb.container.internal.BaseLocalObject.postInvokeCleanup(BaseLocalObject.java:550)
         at weblogic.ejb.container.internal.BaseLocalObject.postInvokeCleanup(BaseLocalObject.java:496)
         at com.adobe.idp.um.businesslogic.directoryservices.DirectorySynchronizationManagerBean_f5g74_ELOImpl.synchronizeProviders(DirectorySynchronizationManagerBean_f5g74_ELOImpl.java:267)
    Joel

  • Is Adobe planning on fixing memory issues or are they hoping Fireworks will go away?

    When trying to use Fireworks for larger sites, we (and a lot of other people we talk to) have always had issues with Fireworks crashing (Mac and PC).
    We try to split files when working on larger sites but if you use a lot of symbols, updating them is really slow - often slower than saving files and it doesn't stop the crashing.
    I realise people might say we're asking for trouble designing pages that are 3000px - 4000px long but long scrolling pages often work well (http://www.apple.com/macbookpro/design.html http://www.kaleidoscopeapp.com/) and we find ourselves designing more and more of them.
    When using Fireworks every day, you have to wonder if Adobe would prefer everyone switch to Photoshop - does anyone know what their plans are for Fireworks?
    Practical suggestions for stopping the crashes would also be great!
    Cheers
    Ben

    Thanks Jim,
    We saw a dot release for Fireworks with the CS5.5 Suite
    When I looked at CS5.5 I didn't see Fireworks - totally missed that!
    Can you tell me, are you designing multi-page files at these dimensions? If not - good. If so, I would question why, considering that you need to move to Dreamweaver or another web editor to create the actual final site.
    Wherever possible, we actually wireframe then design directly in HTML/CSS but on bigger sites, we use developers who like Fireworks files. Plus on the bigger sites, corporate clients often prefer seeing how key pages will look/flow with content in them before they go out for coding.
    In these situations, we've tried both multi-page documents and single page documents. But with a lot of symbols, it's actually quicker to reduce the undos to 1, use multi-page docs and gingerly save after every change! We've found updating lots of symbols across multiple docs painfully slow.
    Again, I am making a lot of assumptions here, so please forgive that.
    No worries!
    Also keep in mind that the MAXIMUM page dimension for a new Fireworks document is 6000 pixels. And it sounds like you're getting awfully close to it. FW does not have the same kind of memory management features that Photoshop has.
    I guess this is the crux of the issue. We can load up the file size in Photoshop but Fireworks has so many features targeted to web design that it's hard to use Photoshop for this purpose.
    I thought I'd read somewhere that Fireworks will only ever use 2GB of RAM, and I've noticed a similar thing in Activity Monitor - Fireworks seems to only ever take about 1.98GB of RAM. However, this is pure speculation and I don't know it for sure.
    Anyway, thanks for the replies. I guess I was curious to see if anyone had heard what Adobe might be planning as the memory issue is surely well known to them - and it's obviously a conscious decision to not allow Fireworks to access more.
    Unless CS5.5 addresses this?
    Cheers
    Ben

  • Lightroom 3.2 out of memory issues

    I had been using the beta version of Lightroom 3 without issues. Once I installed the shipping version I get out-of-memory messages all the time. I first noticed this when I went to export some images. I can get this message when I export just one image, or partway through a set of images (this weekend it made it through 4 of 30 images before it died). If I restart Lightroom it's hit or miss whether I can proceed or not. I've even tried restarting the box with only Lightroom running, and I still get the out-of-memory issue.
    I've also had problems printing.  I go to print an image and it looks like it will print but nothing does.  This does not generate an error message it just doesn't do anything.  So far restarting Lightroom seems to fix this problem.
    When in the develop module and click on an image to see it 1:1 at times the image is out of focus.  If I click on another image and then go back to the original it might be in focus.
    I have no idea if any of this is related, but I thought I'd throw it out there. I've been using Lightroom since version 1.0 and have had very good luck with the program. It is getting very frustrating trying to get anything done. I searched through the forum, but the memory issues I found were with older versions. I'd be very grateful if anyone could point me in the right direction.
    Ken
    System:
    i7 860
    4g memory
    XP SP3

    Hi,
    You can get the HeapDump Analyzer for analyzing IBM AIX heapdumps from the below mentioned link.
    http://www.alphaworks.ibm.com/tech/heapanalyzer
    http://www-1.ibm.com/support/docview.wss?uid=swg21190608
    Prerequisites for obtaining a heapdump:
    1. Add -XX:+HeapDumpOnOutOfMemoryError to the Java options of the server (see notes 710146, 1053604) to get a heap dump automatically when the error occurs.
    2. You can also generate heapdumps on request:
    Add -XX:+HeapDumpOnCtrlBreak to the Java options of the server (see note 710146), then send signal SIGQUIT to the jlaunch process representing the server, e.g. using kill -3 <jlaunch pid> (see note 710154).
    The heap dump will be written to the output file java_pid<pid>.hprof.<millitime> in the /usr/sap/<SID>/<instance>/j2ee/cluster/server<N> directory.
    Both these parameters can be set together too to get the benefit of both the approaches.
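Annotated, the relevant Java option lines from the steps above look like this (the comments are explanatory only, not part of the option syntax):

```
-XX:+HeapDumpOnOutOfMemoryError   # write java_pid<pid>.hprof automatically on OOM
-XX:+HeapDumpOnCtrlBreak          # allow an on-demand dump via SIGQUIT / kill -3
```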
    Regards,
    Sandeep.
    Edited by: Sandeep Sehgal on Mar 25, 2008 6:51 PM

  • How to connect local database when installing BRStudio as Multi-Instance?

    Hi,
    I have installed BRStudio 7.10 in multi-instance mode, planning to administer several local Oracle databases on one server.
    Is this installation the correct method? Should I use a dedicated-instance installation instead?
    When I try to create a local database instance in BRStudio, it prompts me to input the remote command line to connect to the DB.
    Do I have to use putty/ssh etc. to connect to even a local Oracle database?
    Please suggest.
    Regards,
    Alex

    Hi Alex,
    Yes, it seems you need to connect using ssh/rsh or putty even to access the local server...
    I have a very good document on it... please have a look; it may solve your issue.
    [BRtools Studio Installation and Configuration|http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/009af54c-7ce4-2b10-de8e-b062f7cbbcf7?quicklink=index&overridelayout=true]
    all the best !

  • Jrun.exe Memory Issue

    I am using ColdFusion 8 Developer Edition as a testing server for Dreamweaver website development. I am running Windows Vista Home Premium with 2 GB of memory. When I start my computer I run into memory issues, which I have traced to jrun.exe, which is a part of ColdFusion. For maybe the first hour jrun is using well over half a gig of memory. After that it drops to about 60,000 K. Does anyone know of a way to reduce that?
    I am a mid-level user with no server admin experience beyond setting up CF Developer Edition using mostly default settings. I have found support articles about this problem, but all of them were way over my head. Any suggestions need to be very specific and not in server admin jargon.

    You can try reducing the max heap for Java in the ColdFusion Administrator to a slightly lower value so that garbage collection might happen sooner. It's not easy to say how much max heap you should set, as it totally depends on your application usage. Also, you can try upgrading your Java version to see if that improves garbage collection.

  • Virtualised Multi-Instance SQL Server Cluster - Processor Resource Management

    Hi - We're in the process of implementing a multi-instance SQL 2014 guest cluster on Windows 2012 R2.  To our dismay, it seems that Windows System Resource Manager (WSRM) is deprecated in Windows 2012 R2, so we're now stuck for how best to manage CPU usage
    between SQL instances....
    As far as I can see, I'm left with two options, but both of these have problems:
    1) Use SQL processor affinity within the guest cluster, with each SQL instance assigned to dedicated vCPUs. However, I'm not certain that setting SQL processor affinity within a VM will actually have the desired effect!?..
    - When there is physical CPU capacity available, I'd hope Hyper-V would provide it to whichever v-CPU is demanding it.  
    - When VM processor demand exceeds the physical CPU capacity, I'd hope the SQL instances would receive a proportion of the physical CPU time according to the number of v-CPU(s) assigned through the affinity settings.
    2) Use a VM (actually 2, because it's a 2-node guest cluster) per SQL instance!.. This is not ideal, as we need multiple SQL instances, and it would result in an administrative and performance overhead.
    Does anyone have any information or thoughts on this?  How can we manage a virtualised multi-instance SQL deployment now that WSRM has been deprecated?  Help me please!

    I'm not sure what the requirements are for each of the 2 VMs in the SQL guest cluster.
    I'm assuming the guest cluster resides on a Hyper-V CSV with at least 2 Hyper-V hosts, and the 2 VMs are configured with Anti-affinity to ensure they never reside on the same Hyper-V host.
    I've been able to configure CPU resources to VMs from the standard controls in Hyper-V Manager:
    See this blog post
    Which edition of SQL 2014 are you using?
    This matters because of these limitations.
    Also consider running SQL Server with Hyper-V Dynamic Memory - see Best Practices
    and Considerations
    Hyper-V performance tuning - CPU
    Hyper-V 2012 Best Practices
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA http://superwidgets.wordpress.com (Please take a moment to Vote as Helpful and/or Mark as Answer, where applicable) _________________________________________________________________________________
    Powershell: Learn it before it's an emergency http://technet.microsoft.com/en-us/scriptcenter/powershell.aspx http://technet.microsoft.com/en-us/scriptcenter/dd793612.aspx

  • Cfschedule on multi instance server

    Hi All
    Hope someone can help me with this. We run a multi-instance CF server with 3 "slave" instances, which handle the site requests, and the CFusion instance, which handles the scheduled tasks.
    The application I'm writing has a few scheduled tasks that need setting up each time it gets installed, and it is installed multiple times for a variety of clients. So far, this has been done either by manually setting them up individually on the CFusion instance, or via an admin menu extension I created that fires them off (knowing that the <cfschedule> is being run on CFusion, thus installing them to that instance).
    My question however is: it would be much more efficient if the application could check for the existence of the scheduled tasks and, if they're not there, install them. The problem though is that the request would be coming from the initial install, thus a web request, so it would be coming from one of the "slave" instances and would install the scheduled tasks to them = bad. Is there any way to direct a specific request, or point <cfschedule> at a specific instance of ColdFusion?
    Checking for their existence on the CFusion instance is easy, I can read:
    C:/JRun4/servers/cfusion/cfusion-ear/cfusion-war/WEB-INF/cfusion/lib/neo-cron.xml
    But then directing the request seems to be impossible in this way.
    I am running CF9.0.1 on a Windows 2008 Server box with IIS7.5
    Hopefully someone may be able to help
    Thanks in advance
    Tom

    No one? Bueller? Bueller?
    OK, so I may have come up with a solution, I'm just not sure how elegant it is:
    I have a separate install.cfm that handles the scheduled task creation. The file that gets included on application start now moves this install.cfm to the /CFIDE directory and then does a cfhttp call to it on 127.0.0.1 ... which is bound to the CFusion instance ... thus installing the scheduled tasks to the correct instance and making my OCD happy. The file is then deleted once the cfhttp confirms a correct setup.
    Does the job, but as I say, it doesn't seem to be the most elegant of ways.
    Tom
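For anyone finding this later, the workaround can be sketched in CFML roughly as follows (the file names and the assumption that 127.0.0.1 is bound to the CFusion instance come from the posts above; treat this as a sketch, not the exact code used):

```cfm
<!--- Call install.cfm over the loopback address, which is bound to the
      CFusion instance, so the <cfschedule> tags inside it register the
      tasks on CFusion rather than on the slave serving this request. --->
<cfhttp url="http://127.0.0.1/CFIDE/install.cfm" method="get" result="installResult">

<cfif installResult.statusCode contains "200">
    <!--- Setup confirmed: remove the temporary installer file --->
    <cffile action="delete"
            file="C:\JRun4\servers\cfusion\cfusion-ear\cfusion-war\CFIDE\install.cfm">
</cfif>
```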

  • Multi-instance Install for CF 8

    Hey all,
    so we have installed CF 8 for multi-instance. The main instance will show up fine in the browser, but any additional instances will not show up correctly. Either the instance shows a blank white screen, or it gives a page not found.
    We are setting up virtual sites in IIS to set the new file locations.
    Need help badly with this!!
    thanks
    Dan

    First check that the ColdFusion instances are properly running. If you go to the instance manager of the main 'cfusion' instance, where you created all the sub-instances, you should be able to click on the icon that links to each instance's ColdFusion Administrator. When you do this you are going to get pages using IP addresses and ports in the URL bar of your browser.
    If that works, can you access static HTML content in each of your virtual servers? Reading your last post, I suspect this might be the source of your problem. When you run multiple websites from the same server, each site must have a separate address. This can be done with IP addresses, ports or host-headers. Making this work is going to involve configuration in IIS and/or your domain's Domain Name Server.
    If you can get to the ColdFusion Administrator application of all your instances, and you can get to static content in each of your virtual web sites, then there may be other problems preventing the web sites and ColdFusion from working together. But confirm those two things first.
