High memory consumption when Dynamic Memory is enabled on a Hyper-V machine

I have created a VM in Hyper-V on a Windows Server 2012 R2 machine with the Enable Dynamic Memory option turned on.
After creating the VM, I see the VM's memory consumption at 85%.
I would like to know whether there is an efficient way to use the "Enable Dynamic Memory" option when configuring the VM so that memory consumption is reduced.

As others have said, it depends on the application being used. Also, how do you have your guests configured for startup memory, minimum memory and maximum memory? If your application is poorly written and has a memory leak, you will see dynamic memory just
grow. If the application is simply heavy because of traffic or caching, it will use lots of memory.
An example for me is a proprietary web application. During periods of no usage it might use roughly 400 MB on the server, but when it gets used it grows. I've seen some web servers hit 10 GB while others grow by only a few megabytes. After things calm
down a bit, I notice memory returns to the 400 MB to 700 MB level. That is just how my application behaves and how I serve it up to customers; I had to track all of that beforehand. You can use the Microsoft Assessment and Planning (MAP) toolkit to help with your dynamic memory settings.
It can be done for SQL Server as well, but as Darren said before, you might not want to. In my environment I do it, but I have tweaked things to work for me. An example is tweaking the database's memory usage to match the startup memory, so to the OS it appears static,
while leaving dynamic memory for processes that fire outside of SQL Server. There is a
guide for SQL Server and dynamic memory.
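The interplay of startup, minimum, maximum and the Memory Buffer setting explains most of the numbers people see here. A rough sketch of the arithmetic Hyper-V roughly follows (a simplification of the balancer, not its actual implementation; all names and values are illustrative):

```python
def target_assigned_mb(demand_mb, buffer_pct, minimum_mb, maximum_mb):
    """Rough sketch: Hyper-V aims to assign a dynamic-memory VM its current
    demand plus the configured Memory Buffer percentage, clamped between the
    configured minimum and maximum. Integer MB for simplicity."""
    target = demand_mb * (100 + buffer_pct) // 100
    return max(minimum_mb, min(maximum_mb, target))

# The idle web-app VM described above (~400 MB demand, 20% buffer):
print(target_assigned_mb(400, 20, 512, 10240))   # 512 (clamped to the minimum)
# The same VM under load (~8 GB demand):
print(target_assigned_mb(8192, 20, 512, 10240))  # 9830 (demand + 20% buffer)
```

So a VM showing 85% in-guest usage is not necessarily a problem: assigned memory tracks demand plus the buffer, and shrinks again only when demand falls.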

Similar Messages

  • Why do I get out of memory errors when 10GB memory is free?

I am on HP-UX 64-bit 11.23 Itanium, running Oracle 10.2.0.3. My server has 24GB of memory, of which 10GB is free (as seen in glance). When I run oracle exp or rman commands, I get:
    ORA-04030: out of process memory when trying to allocate 1049112 bytes (KSFQ heap,KSFQ Buffers)
    I checked both rman and exp are 64bit executables, so they should be able to access all the memory on the system.

I have just one parameter in init.ora, sga_target, which controls everything in the SGA. The two instances where I was reporting the problem have sga_target of 256M and 192M. Problems happen off and on, but 9 to 10GB of free memory is always available on the server.
Here is more information on the problem:
1.     I do not think the problem is with ulimit, but something is definitely not set correctly. ulimit -a:
    time(seconds) unlimited
    file(blocks) unlimited
    data(kbytes) 4194300
    stack(kbytes) 131072
    memory(kbytes) unlimited
    coredump(blocks) 4194303
These parameters seem reasonable.
2.     I have 10GB of free memory. I run a simple java command:
java
It works.
3.     Now I increase sga_target for one of my Oracle instances from 256M to 512M. I only have one parameter, sga_target, which controls everything in the SGA. There are many other Oracle instances on the server. My Oracle instance starts without problems.
4.     I now run java, and it gives me an
out-of-memory error, so Oracle has exhausted some memory (probably shared memory) which is needed by java. I still have 9-10GB of memory on my server, so why is java not using this memory?
5.     After the Oracle instance starts, off and on Oracle backups fail with the ORA- error (not enough memory) I reported earlier.
    I hope HP engineers can figure this out.
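One hedged observation about the ulimit output above: data(kbytes) is not unlimited, and ORA-04030 is a per-process allocation failure, so free system memory does not help once a single process hits its data-segment cap. A quick conversion (the assumption being that this limit applies to the failing process):

```python
data_limit_kb = 4_194_300               # the data(kbytes) value from ulimit -a above
data_limit_gib = data_limit_kb * 1024 / 2**30
print(round(data_limit_gib, 3))         # 4.0 -> each process's data segment is capped near 4 GiB
```

If that is the cause, raising maxdsiz (the HP-UX kernel tunable behind the data limit) for the Oracle user would be the thing to test; this is a suggestion, not a confirmed diagnosis.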

  • [2008 R2] Hyper-V Dynamic memory warning with half of vHost memory free

    Our virtual server host system has about 19GB of Memory free (out of 32GB total), yet a virtual guest using Dynamic Memory was only being assigned 8GB and the demand was around 13GB, therefore generating a Warning state on the Memory Status. Logging onto
    the guest machine showed about 8GB of memory being used as well. The end-users were receiving memory errors in their applications. Any idea why the guest system was in this warning state?

There is a perception gap in the OS, and different numbers come from different places.
In a nutshell, the RDP server has a memory leak if you constantly disconnect and reconnect. It ends up chewing up memory, but when available memory is tested, this consumption is missed. If you log out of the RDP session instead of disconnecting,
the memory is given back and can actually be used.
It is a strange interaction with RDP that has been there since the original release, but it is specific to using RDP to connect to the Hyper-V server for VM management and disconnecting without ever logging out.
There was also a Google process, which many folks reported a long while back, that caused memory consumption that prevented VMs from starting as well.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.

  • Dynamic memory freezes

    Hi guys,
    we are using three node cluster with Windows Server 2012R2. Our virtual machines are virtual
    desktops with Windows 8.1 guest OS, 4 vCPU and 8 GB of dynamic memory.
    Sometimes dynamic memory stops working, guest OS is not able to allocate more RAM and starts to swap. This swapping slows performance of all our virtual machines, since
    all virtual machines are on same cluster shared volume. This happens at random RAM amount allocated, for example at 2.6 GB,
    which is way below our 8 GB limit.
    Live migration helps to solve this issue for migrated VM, after live migration the guest OS is able to allocate
    more ram and starts to work properly, swapping disappears. However after some time, problem appears on other VM.
    Our physical machines have 2x 10core Xeon and 394 GB of RAM. Only about 30% of RAM is used, so there is free space for all virtual machines. But we need to solve this issue so dynamic RAM starts to work properly.
    Do you have similar experience?

    Hi manasj,
Please check the event log "Microsoft-Windows-Hyper-V-Worker/Admin" to see if there is any clue.
Also, please consider the following potential cause:
"In a Hyper-V Failover Cluster, a virtual machine can have its configuration information stored on cluster shared storage (a Physical Disk resource or a Cluster Shared Volume (CSV)). If the Physical Disk resource or the Cluster Shared Volume (CSV) goes offline or fails,
the VM is placed in a critical state. Once the storage is re-connected, the VM should no longer be in a critical state. However, the virtual machine worker process (vmwp.exe) does not refresh all of its file handles."
For details, please refer to the following link:
http://support.microsoft.com/kb/2504962/en-us
    Best Regards
    Elton Ji

  • Guest keeps on consuming all the dynamic memory

    Software:
    Host: Windows Server 2012 R2 x64 fully patched
    Guest: Windows 8 Update 1 x64 (Fully Patched). 
I've been experiencing this problem for a few months and really don't know where to go from here. After starting the specific guest that is having the issue, it happens anywhere between 3 and 7 days later: the guest consumes almost all the memory. The only way
I can log into the guest is with Hyper-V Manager; Remote Desktop will hang. Then, when I log into the guest, I can't bring up Task Manager or much else. Every time, I have to kill the VM and restart it. Last time I thought to keep Process Monitor up and
running so I could inspect things. Well, it happened again today. According to Hyper-V Manager the guest was consuming almost 24GB of memory;
here is the memory demand:
What's weird is that when I logged into the VM, it showed it only had 4GB of memory with 3.1GB used. There was hardly anything running other than a few services, and they weren't using much memory. Process Explorer quickly locked up on me; this always happens
after logging in for, say, 5-10 minutes. After trying to diagnose for a few minutes, it appears as if I have rebooted the system, because I get the following:
but I definitely did not reboot. If I come back in 24 hours, it will still be at the same spot. Again I have to kill the VM, and the same vicious cycle ensues. I'm at a loss how to diagnose this. I've tried to scour Event Viewer, but it hasn't helped.
Sure, I could just turn dynamic memory off or lower the upper threshold of the dynamic memory, but I shouldn't have to do that; that's really just masking the problem. The only default dynamic memory value I changed was the "Startup Memory". I really
don't want to rebuild the VM, but what other options do I have here?

Well, I updated one of my main programs on that machine (the one that I would think used most of the memory) and I thought the problem went away, but nope, I was wrong: it came back. Luckily I had left RAMMap and Process Explorer running, so I could see
it when it was consuming almost 6GB rather than the 20+ GB like before. Here is a screenshot of RAMMap:
As you can see, "Paged Pool" memory is huge. I closed almost every app running on the system hoping that would help, but it really didn't. The only things running at this point outside of the normal Windows processes are EMET 5 and Norton
Antivirus. I tried to halt "Auto Protect" and the firewall, but that didn't help either.
Unfortunately, even though I did catch it early, most programs fail to load. Even File Explorer fails to load. As usual, after interacting with Process Explorer for a while, I get the white screen of death out of it. Hmmm... I need to figure out why so much pool memory
is being consumed.

  • I usually have 30 tabs opened and the memory rises unexpectedly; when it reaches 1.2GB, Firefox crashes, and closing tabs does not release the memory

    I usually have 30 tabs opened, and the memory rises in an unexpected way, so that when it reaches 1.2GB, Firefox crashes. Also, if Firefox is consuming, for example, 700MB of memory (with a lot of tabs opened), and I close those tabs and leave just 2 or 3 opened, the memory usage does not drop and remains at 700MB. Thanks for your attention. Regards, Ricardo
    == Crash ID(s) ==
    bp-765e3c37-0edc-4ed6-a4e9-7ed612100526

    I have the same sort of problem with 30-40 tabs, with memory usage growing until a crash. It seems a lot worse after restarting from standby mode on Vista; it's often highly unresponsive for a while, then crashes.
    I have a fair few plug-ins running, one of the benefits of Firefox.
    Is there a way I can log the memory use of parts of Firefox, particularly the plug-ins?

  • High memory use when checking downloads folder through browser

    I have a total of 3 GB of DDR2 RAM, a Pentium(R) Dual-Core CPU T4500 @ 2.30GHz, Win 7 Home Premium 64-bit, and Firefox 27.0 beta.
    I right click on a file to "open containing folder" from the drop down menu in downloads.
    When the folder has been opened, Explorer tries to index it (which it never completes; after 5 minutes I close the folder and shut down the dllhost process because the laptop is blazing hot) and memory keeps increasing until no more is available.
    I checked Task Manager and see that dllhost is the particular process causing the spike in memory usage. Even after closing the folder, I have to right-click and stop the process, and memory is restored after a while.

    Hello,
    '''Try Firefox Safe Mode''' to see if the problem goes away. Safe Mode is a troubleshooting mode, which disables most add-ons.
    ''(If you're not using it, switch to the Default theme.)''
    * On Windows you can open Firefox 4.0+ in Safe Mode by holding the '''Shift''' key when you open the Firefox desktop or Start menu shortcut.
    * On Mac you can open Firefox 4.0+ in Safe Mode by holding the '''option''' key while starting Firefox.
    * On Linux you can open Firefox 4.0+ in Safe Mode by quitting Firefox and then going to your Terminal and running: firefox -safe-mode (you may need to specify the Firefox installation path e.g. /usr/lib/firefox)
    * Or open the Help menu and click on the '''Restart with Add-ons Disabled...''' menu item while Firefox is running.
    [[Image:FirefoxSafeMode|width=520]]
    ''Once you get the pop-up, just select "Start in Safe Mode".''
    [[Image:Safe Mode Fx 15 - Win]]
    '''''If the issue is not present in Firefox Safe Mode''''', your problem is probably caused by an extension, and you need to figure out which one. Please follow the [[Troubleshooting extensions and themes]] article for that.
    ''To exit the Firefox Safe Mode, just close Firefox and wait a few seconds before opening Firefox for normal use again.''
    ''When you figure out what's causing your issues, please let us know. It might help other users who have the same problem.''
    Thank you.

  • High Virtual memory usage when using Pages 2.0.2

    Hey there,
    I was just wondering whether there have been any other reports of unusually high memory usage when using Pages 2.0.2, specifically virtual memory. I am running iWork 06 on the Mac listed below, and Pages has been running really slowly recently. I checked Activity Monitor and Pages is using hardly any physical memory but loads of virtual memory, so much so that the page outs are almost as high as the page ins (roughly 51500 page ins / 51000 page outs).
    Any known problems, solutions or comments for this problem? Thanks in advance

    I don't know if this is specifically what you're seeing, but all Cocoa applications, such as Pages, have an effectively infinite Undo. If you have any document that you've been working on for a long time without closing, that could be responsible for a large amount of memory usage.
    While it's good practice to save on a regular basis, if you're making large amounts of changes it's also a good idea to close and reopen your document every once in awhile, simply to clear the undo. I've heard of some people seeing sluggish behavior after working on a document for several days, which cleared up when the document was closed and reopened.
    Titanium PowerBook   Mac OS X (10.4.8)  

  • When opening an animated gif, lots of memory consumed followed by crash

    Any large (read: anything more than a megabyte) animated GIF causes Firefox to consume all available memory, and then subsequently, crash.
    I've tried the following:
    - Disable all extensions
    - Disable hardware acceleration
    - Run in safemode
    - New Firefox installation
    - Set affinity of Firefox to only use one core
    With no effect.
    I am a long time Firefox user and enthusiast, but I am really considering making the transition to Chrome (which does not crash with long animated GIFs)
    There is no option to submit a crash report about it, and there are also no crash reports on my local system AT ALL.
    The following is a short video demonstrating the memory leak.
    https://www.youtube.com/watch?v=mIS8baOSwJ0&feature=youtu.be

    After some further reading, I set the preference image.mem.decodeondraw=false in about:config. Firefox struggled with the GIF, not always able to display anything useful in the window, but did not crash. Clicking the Home button to navigate the tab away allowed Firefox to unload memory. Not an improvement, really.
    Doing a little more searching, I discovered that the file you linked was the demo for this yet-to-be-fixed bug: [https://bugzilla.mozilla.org/show_bug.cgi?id=523950 523950 – Long animated GIF makes Firefox consume all available memory].
    It's generally not helpful to add comments to bugs (unless you can help fix them), but you can register on the Bugzilla site and "vote" for them to be fixed. See:
    * [https://bugzilla.mozilla.org/page.cgi?id=etiquette.html Bugzilla Etiquette]
    * [https://bugzilla.mozilla.org/page.cgi?id=voting.html Voting]

  • Hyper-V Dynamic memory, Driver Locked

    Hello All,
    I've been wondering about the following.
    I have 2 Dell R910 (Windows Server 2008 R2 SP1) machines as my Hyper-V hosts, with around 40-50 VMs running on them. For most of my servers I've enabled dynamic memory, but on some of these machines I've seen the following:
    For some reason, a machine running Remote Desktop Services (with Web Access) is using around 3.8GB of memory.
    The process list accounts for only a fraction of the actual memory being used.
    But RAMMap shows that 2.5GB is being used by kernel drivers. From what I've read, this might be the ballooning effect of dynamic memory. However, the RAM usage was the same yesterday evening (12 hours before I took these screenshots), when no one was actually
    working on this server.
    I've seen this happen on multiple of my guest machines; they are running Windows Server 2008 R2 SP1 Datacenter Edition.
    Can anyone explain to me why it is doing this?
    Thank you.
    Kind Regards,
    Tom

    https://blogs.technet.com/b/vm/archive/2011/01/13/hyper-v-r2-service-pack-1.aspx
    How is a virtual machine's memory taken away?
    The "balloon" method: a specific driver in the guest OS begins to consume the allocated memory, taking it so that the OS in the VM cannot refer to it, and in effect giving the allocated memory back to the hypervisor for other virtual machines.
    The guest OS continues to believe that it has "a lot" of memory; the pages simply look busy, owned by a process and marked as "driver locked". When memory is subsequently added back to this virtual machine, those pages
    are added to the process's memory address space and released for the needs of the OS.
    Have a nice day!
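The "driver locked" accounting described above can be sketched with a toy model (illustrative only, not Hyper-V internals; all names and numbers are made up):

```python
class BalloonedGuest:
    """Toy model of Hyper-V ballooning: the guest's visible RAM stays fixed,
    but pages claimed by the balloon driver appear as 'driver locked' and are
    effectively handed back to the hypervisor."""
    def __init__(self, visible_mb):
        self.visible_mb = visible_mb   # what the guest OS believes it has
        self.ballooned_mb = 0          # shown in RAMMap as Driver Locked

    def inflate(self, mb):
        """Host reclaims mb of guest memory via the balloon driver."""
        self.ballooned_mb += mb

    def deflate(self, mb):
        """Host gives memory back; the driver releases locked pages."""
        self.ballooned_mb -= min(mb, self.ballooned_mb)

    def usable_mb(self):
        return self.visible_mb - self.ballooned_mb

g = BalloonedGuest(4096)
g.inflate(2560)                 # host takes back 2.5 GB
print(g.visible_mb)             # 4096 - the guest still reports its full RAM
print(g.usable_mb())            # 1536 - what the guest can actually use
```

This matches the RAMMap observation in the question: a large "Driver Locked" figure with no ordinary process owning the memory.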

  • Safari Web Content consuming all my memory

    I'm having a problem with Safari Web Content.
    I've just re-loaded Mountain Lion in a brute-force attempt to resolve my issue, but it's still occurring.
    The issue is: with very little running, if I run Safari and bounce around a few websites, I can easily consume all my memory. I end up with this:
    As you see, Active memory is too high (before this test I had Mail open, and Active memory was about 2GB, what you might expect normally).
    Active memory had actually peaked at about 3/4 of the total, but then some must have got swapped out to Inactive, giving us the picture above.
    But look at the Real Mem occupied by Safari Web Content! That can't be right, surely.
    Nevertheless, I'm led to believe that OSX's memory management algorithms will swap out Inactive for Free when required... well, this isn't happening. I saw Free memory down to 7MB at one point, not a typo. Everything grinds to a halt.
    If I am patient and try to quit Safari, I end up back here:
    clearly showing that if I manage to kill Safari, the memory gets freed up again.
    I understand why OSX wants to handle memory with the Inactive concept: "you've just closed Word, but maybe you'll open it again in the next ten minutes, so we'll leave it in memory, inactive, so when you try to open it, it will open really quickly". Right now, I'd rather turn that feature off, have memory behave more primitively, and take the hit on re-opening.
    As I mentioned, I reloaded OSX Mountain Lion and the problem persists. When you re-load or upgrade your OS, many configuration settings (desktop, mail, safari) are kept so that the upgrade looks seamless; clearly whatever is corrupting me here has been kept also. For that reason I'm loath to go to Mavericks, because I presume the same corruption would exist (and also I hear Mavericks is still a bit shaky with the native Mail app).
    What config-type files should I look to safely delete and have the OS rebuild? Any ideas?

    OK, so I tried a Safe Boot.  It wired 5GB of memory, and for a while I thought what was left would behave itself, but within about 10 minutes I consumed everything again.
    Here's the before:
    And here's 10 minutes later, after bouncing around a few websites:
    I also tried firing up Safari, going to Facebook, and letting it sit for a while.  No effect.
    So, the problem occurs when visiting multiple sites - memory isn't getting released properly.
    I think I'm left with a move to Mavericks.

  • Dynamic Memory on Linux VM

    Hello!
    Hyper-V 3.0 is great! After it is released, I think it will become the most popular hypervisor. But one major drawback remains:
    nowhere is support for dynamic memory announced for Linux VMs on Hyper-V.
    Is there a plan, at least in some perspective, to implement this functionality?
    For now we have to use two different hypervisors, as Hyper-V does not meet all the requirements of our customers.
    Mark Tepterev
    Oversun

    Q from Brian Wong (moved here from the P.P.P.S.):
    ----- Original Message -----
    From: "Brian Wong"
    To: <[email protected]>
    Sent: Thursday, March 06, 2014 9:24 AM
    Subject: Re: Linux does not use more than the startup RAM under Hyper-V with dynamic memory enabled
    On 3/6/2014 1:20 AM, Brian Wong wrote:
    > The kernel is built with the full set of Hyper-V drivers, including the
    > key "Microsoft Hyper-V Balloon Driver" as well as memory hot-add and
    > hot-remove functionality. This is happening with both the Gentoo-patched
    > 3.10.32 kernel and the vanilla 3.12.5 kernel. The host machine has a
    > total of 24 GB of memory.
    >
    > For now, I am working around the issue by starting the VM with the
    > startup memory set to the maximum and letting Hyper-V take the unused
    > memory back when it is not in use. The VM will then get the extra memory
    > when it needs it.
    >
    > Have I encountered a bug in the Hyper-V balloon driver?
    >
    Just a correction: the vanilla kernel version is 3.13.5, not 3.12.5.
    Sorry for any confusion.
    Brian Wong
    http://www.fierydragonlord.com
    ----- Original Message -----
    From: "Brian Wong"
    To: <[email protected]>
    Sent: Thursday, March 06, 2014 9:20 AM
    Subject: Linux does not use more than the startup RAM under Hyper-V with dynamic memory enabled
    I'm new to LKML, so please don't be too hard on me :)
    I'm running Gentoo Linux under Microsoft Client Hyper-V on Windows 8.1
    Pro, and I've noticed some odd behavior with respect to dynamic memory
    (aka memory ballooning). The system will never use more than the startup
    memory defined in the virtual machine's settings.
    (VVM: a typo in "virtual" fixed by me, for better searchability in future)
    For example, if I set the startup memory to 512 MB, and enable dynamic
    memory with a minimum of 512 MB and a maximum of 8192 MB, the system
    will never allocate more than 512 MB of physical memory, despite Hyper-V
    assigning more memory to the VM and the added memory being visible in
    the output of "free" and "htop". Attempting to use more memory causes
    the system to start paging to swap, rather than actually allocating the
    memory above the startup memory assigned to the VM.
    The kernel is built with the full set of Hyper-V drivers, including the
    key "Microsoft Hyper-V Balloon Driver" as well as memory hot-add and
    hot-remove functionality. This is happening with both the Gentoo-patched
    3.10.32 kernel and the vanilla 3.12.5 kernel. The host machine has a
    total of 24 GB of memory.
    (Brian Wong wrote on 3/6/2014 1:20 AM: "Just a correction: the vanilla kernel version is 3.13.5, not 3.12.5.")
    For now, I am working around the issue by starting the VM with the
    startup memory set to the maximum and letting Hyper-V take the unused
    memory back when it is not in use. The VM will then get the extra memory
    when it needs it.
    Have I encountered a bug in the Hyper-V balloon driver?
    Brian Wong
    http://www.fierydragonlord.com
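One way to check for the behavior Brian describes: hot-added memory appears as memory blocks under sysfs, and a block that is not "online" is visible to the kernel yet unusable. A small sketch (the sysfs path is the standard Linux location; the function name is mine, and treating stuck-offline blocks as the culprit is an assumption about this specific failure mode):

```python
import glob

def offline_memory_blocks(sysfs_root="/sys/devices/system/memory"):
    """Return the state files of sysfs memory blocks that are not 'online'.
    On a Hyper-V guest with dynamic memory, hot-added blocks stuck here would
    explain RAM that 'free' reports but the system never actually uses."""
    offline = []
    for path in glob.glob(sysfs_root + "/memory*/state"):
        with open(path) as f:
            if f.read().strip() != "online":
                offline.append(path)
    return offline

# On a Linux guest, run as root: print(offline_memory_blocks())
```

An empty list would point away from the onlining problem discussed later in this thread; a non-empty one would point straight at it.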
    ----- Original Message -----
    From: "Victor Miasnikov"
    To:  [email protected]; "Brian Wong"
    Cc: "Abhishek Gupta (LIS)" ( zzzzzzzzzzzzzzz (at) microsoft.com>; "KY Srinivasan" zzzzzzzzzzz (at) microsoft.com
    Sent: Thursday, March 06, 2014 1:07 PM
    Subject: Re: Linux does not use more than the startup RAM under Hyper-V with dynamic memory enabled RE: [PATCH 2/2]
    Drivers: hv: balloon: Online the hot-added memory "in context" Re: [PATCH 1/1] Drivers: hv:
    Hi!
     Short:
     A question for the Linux kernel team: can the patch
    >>> [PATCH 2/2] Drivers: hv: balloon: Online the hot-added memory "in context"
    solve the problems with dynamic memory hot-add in Hyper-V VMs running a Linux OS?
     Full:
    BW>> .., if I set the startup memory to 512 MB, and enable dynamic
    BW>> memory with a minimum of 512 MB and a maximum of 8192 MB,
    BW>>  the system will never allocate than 512 MB of physical memory
    BW>>
    BW>> Have I encountered a bug in the Hyper-V balloon driver?
    BW>>
     Unfortunately, it's a long story . . . :-(
    a)
     I already wrote (on January 09, 2014 2:18 PM) about problems with "Online the hot-added memory" in "user space"; see the P.P.S.
    b)
      See
    Bug 979257 - [Hyper-V][RHEL6.5][RFE] in-kernel online support for memory hot-add
    https://bugzilla.redhat.com/show_bug.cgi?id=979257
     (Info from this topic may be interesting not only for RedHat users.)
    b2)
     Details about the patches related to the "Online the hot-added memory" in "user space" problem:
    >>> [PATCH 2/2] Drivers: hv: balloon: Online the hot-added memory "in context"
    >>>
    >>>
    >>> === 0001-Drivers-base-memory-Export-functionality-for-in-kern.patch
    >>>  . . .
    >>> +/*
    >>> + * Given the start pfn of a memory block; bring the memory
    >>> + * block online. This API would be useful for drivers that may
    >>> + * want to bring "online" the memory that has been hot-added.
    >>> + */
    >>> +
    >>> +int online_memory_block(unsigned long start_pfn) {  struct mem_section
    >>> +*cur_section;  struct memory_block *cur_memory_block;
    >>>
    >>>  . . .
    >>> ===
    >>>
    >>>
    >>> ==
    >>>  . . .
    >>> == 0002-Drivers-hv-balloon-Online-the-hot-added-memory-in-co.patch
    >>>   . . .
    >>>    /*
    >>> -   * Wait for the memory block to be onlined.
    >>> -   * Since the hot add has succeeded, it is ok to
    >>> -   * proceed even if the pages in the hot added region
    >>> -   * have not been "onlined" within the allowed time.
    >>> +   * Before proceeding to hot add the next segment,
    >>> +   * online the segment that has been hot added.
    >>>     */
    >>> -  wait_for_completion_timeout(&dm_device.ol_waitevent, 5*HZ);
    >>> +  online_memory_block(start_pfn);
    >>>
    >>>   }
    c)
      Before applying the patches (see the P.S. about the native udev script), we need to use one of these methods:
     [VVM: URL of this topic skipped]
    c1)
    The "/bin/cp method" by Nikolay Pushkarev:
    The following udev rule works slightly faster for me (assuming that the memory0 bank is always in the online state):
    SUBSYSTEM=="memory", ACTION=="add", DEVPATH=="/devices/system/memory/memory[1-9]*",
    RUN+="/bin/cp /sys$devpath/../memory0/state /sys$devpath/state"
    (VVM: of course it all needs to be placed on one line; it is split across two lines in this message only for visual formatting reasons.)
    c2)
    The udev rule using "putarg", by Nikolay Pushkarev:
     Even the "/bin/cp method" udev rule works, from time to time, not as needed :-(
    As a result, Nikolay Pushkarev wrote a program, putarg.c, that is even faster than the "/bin/cp method":
    ==
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* putarg: write each argument after the first into the file named by argv[1]. */
    int main(int argc, char** argv) {
      int i, fd;
      if (argc < 2) return 0;
      if ((fd = open(argv[1], O_RDWR)) < 0) return 1;
      for (i = 2; i < argc; i++) {
        if (write(fd, argv[i], strlen(argv[i])) < 0) {
          close(fd);
          return i;
        }
      }
      close(fd);
      return 0;
    }
    ==
     The first argument is the name of the output file,
    and argument number 2 (and all subsequent ones, if any) is text that will be written to the output file.
     Compile the source code to an executable by running:
    gcc -o putarg -s putarg.c
    The resulting binary needs to be placed in an accessible location, such as /usr/bin, or wherever you want.
    Now the udev rule using "putarg" can be written as:
    SUBSYSTEM=="memory", ACTION=="add", RUN+="/usr/bin/putarg /sys$devpath/state online"
    This combined solution (the compiled file plus the udev rule) works exceptionally fast.
    Best regards, Victor Miasnikov
    Blog:  http://vvm.blog.tut.by/
    P.S.
    Nikolay Pushkarev, about the standard udev script:
    It is strange that the native udev rule
    SUBSYSTEM=="memory", ACTION=="add", ATTR{state}="online"
    only triggers some of the time (VVM: very often it does not work as needed).
    P.P.S.
    ----- Original Message -----
    From: "Victor Miasnikov"
    To: "Dan Carpenter"; "K. Y. Srinivasan" ; <[email protected]>
    Cc: "Greg KH" ; <[email protected]>; <olaf (at) aepfle.de>; ""Andy Whitcroft"" <zzzzzzzzzzzz (at)
    canonical.com>;
    <jasowang (at) redhat.com>
    Sent: Thursday, January 09, 2014 2:18 PM
    Subject: RE: [PATCH 2/2] Drivers: hv: balloon: Online the hot-added memory "in context" Re: [PATCH 1/1] Drivers: hv:
    Implement the file copy service
    Hi!
    > Is there no way we could implement file copying in user space?
      For the "file copy service", "user space" may be pretty good.
    But I (and other Hyper-V sysadmins) see not-OK (in "politically correct" terminology) results with "hv: balloon: Online the hot-added memory" in "user space":
    ==
     [PATCH 2/2] Drivers: hv: balloon: Online the hot-added memory "in context"
    ==
     Any news? A roadmap?
    Best regards, Victor Miasnikov
    Blog:  http://vvm.blog.tut.by/
    ----- Original Message -----
    From: "KY Srinivasan"
    To: "Victor Miasnikov"; [email protected]; "Brian Wong"
    Cc: "Abhishek Gupta (LIS)"
    Sent: Thursday, March 06, 2014 1:23 PM
    Subject: RE: Linux does not use more than the startup RAM under Hyper-V with dynamic memory enabled RE: [PATCH 2/2]
    Drivers: hv: balloon: Online the hot-added memory "in context" Re: [PATCH 1/1] Drivers: hv:
    > -----Original Message-----
    > From: Victor Miasnikov
    > Sent: Thursday, March 6, 2014 3:38 PM
    > To: [email protected]; Brian Wong
    > Cc: Abhishek Gupta (LIS); KY Srinivasan
    > Subject: Re: Linux does not use more than the startup RAM under Hyper-V
    > with dynamic memory enabled RE: [PATCH 2/2] Drivers: hv: balloon: Online
    > the hot-added memory "in context" Re: [PATCH 1/1] Drivers: hv:
    >
    Victor,
    I will try to get my in-context onlining patches accepted upstream.
    K. Y

  • Dynamic Memory in the production environment

    Hi,
    We configure all production VMs with static memory on Hyper-V 2012.
    We need to make sure there is no negative impact on VM performance if we use dynamic memory, since for the past two years we have read
    that using dynamic memory in production is not recommended.
    I would prefer an answer with a trusted URL.
    Ramy

    Hiya,
    The best answer you can get is that it depends on the application running on the server OS.
    The reason that dynamic memory is not recommended for production environments is that many applications do not support it. That is usually seen with memory-intensive applications (SQL is an example) or simply with caching types of functions (SharePoint
    is an example).
    The major concern is that when Hyper-V decreases the memory for a VM, the application does not understand this. Usually it is not a problem when increasing the memory; again, it will depend on the application.
    Think of it as hot-swap memory and how you used to use that in the physical machine days.
    Also, when there is not adequate memory, the operating system will use paging. Paging uses disks, and disks have far slower access times than memory.
    In general, dynamic memory is easier to control than dynamically expanding disks, as you can set the buffer size of the memory, which is still not available for dynamic disks.
    Besides the above, the following links states:
    "Workloads that are not NUMA-aware will not take advantage of virtual NUMA. However, the guest operating system may perform some NUMA optimization. Enabling Dynamic Memory (therefore presenting only a single virtual NUMA) should not cause performance
    degradation"
    http://technet.microsoft.com/en-us/library/dn282282.aspx

  • Dynamic Memory is not working all the time

    We are in the process of moving our 2008R2 VMs from the 2008R2 Hyper-V servers to new Server 2012R2 hosts.
    We shut down the VMs, copy the files and VHDs to the new CSVs, and import the VM in Hyper-V Manager. Then we make them highly available in Failover Cluster Manager (Configure role - Virtual machine). We mount the integration tools and update the
    VM to version 6.3.9600.16384.
    For a specific type of VM (mostly RDS host servers) we have always had Dynamic Memory configured (when they were hosted on the 2008R2 platform), so we are using the same settings on the 2012R2 platform. The memory settings were:
    Startup memory: 1024 MB
    Minimum memory: 1024 MB
    Maximum memory: 12288 MB
    These VMs reboot every morning, for specific reasons. But now, once in a while (every week or two), we notice that a VM is not using more than 1024 MB while the demand is much higher. Rebooting the server helps most of the time;
    live migrating to another host also helps. In the VM we see that memory usage in Task Manager is 99-100%, and after the move it immediately starts using more than the minimum configured amount.
    Until the failover, the memory usage was 1024 MB and it did not get any higher.
    This happened several times. Last week we changed the Memory configuration to:
    Startup memory : 2048 MB
    Minimum memory: 2048 MB
    Maximum memory: 12288 MB
    But this morning we had a call about the performance of one of the VMs: we saw that it was only using 2 GB of memory while the demand was much higher. After live migrating it to another host, it started using more memory immediately.
    The 2012R2 hosts are not overcommitted; there is plenty of memory still available for the VMs. Those VMs never had this problem on the 2008R2 Hyper-V platform.
    Any idea why this happens?
    Peter Camps

    Peter,
    I think this is a bug of some sort. I say that because the components that make up dynamic memory are as follows:
    Memory Balancer (host service; coordinates how memory changes are made). This is also what reports the memory demand counter, I believe.
    Dynamic Memory Virtualization Service Provider (included in your VMWP.exe process, one per VM; essentially how dynamic memory runs on the host. It listens to the Service Client for metrics.)
    Dynamic Memory Virtualization Service Client (runs inside the VM and reports to the Dynamic Memory Virtualization Service Provider.)
    Since you live migrated the machine and dynamic memory then worked on the other host, the Service Client is running in the guest and shouldn't be the issue. The Memory Balancer runs on the host and shouldn't be the issue either, so that puts the Dynamic
    Memory Virtualization Service Provider in question. When you live migrate the machine, a new VMWP.exe process is created on the destination cluster node. So now the question is whether the host couldn't listen to the service, or the worker process
    skipped a beat and hit a bug.
    Out of curiosity, does it happen on both hosts? Also, have you profiled the servers to see how much memory they really require at startup? When you reboot the RDS servers, how many VMs do you reboot, and is it a staggered process?

  • Templates and Dynamic Memory Allocation Templates

    Hi , I was reading a detailed article about templates and I came across the following paragraph
    template<class T, size_t N>
    class Stack {
        T data[N]; // Fixed capacity is N
        size_t count;
    public:
        void push(const T& t);
    };
    "You must provide a compile-time constant value for the parameter N when you request an instance of this template, such as *Stack<int, 100> myFixedStack;*
    Because the value of N is known at compile time, the underlying array (data) can be placed on the run time stack instead of on the free store.
    This can improve runtime performance by avoiding the overhead associated with dynamic memory allocation.
    Now, in the above paragraph, what does
    "This can improve runtime performance by avoiding the overhead associated with dynamic memory allocation." mean? What does this overhead mean?
    I am a bit puzzled, and I would really appreciate it if someone could explain to me what this sentence means. Thanks...

    The run-time memory model of a C or C++ program consists of statically allocated data, automatically allocated data, and dynamically allocated data.
    Data objects (e.g. variables) declared at namespace scope (which includes global scope) are statically allocated. Data objects local to a function that are declared static are also statically allocated. Static allocation means the storage for the data is available when the program is loaded, even before it begins to run. The data remains allocated until after the program exits.
    Data objects local to a function that are not declared static are automatically allocated when the function starts to run. Example:
    int foo() { int i; ... }
    Variable i does not exist until function foo begins to run, at which time space for it appears automatically. Each new invocation of foo gets its own location for i, independent of other invocations of foo. Automatic allocation is usually referred to as stack allocation, since that is the usual implementation method: an area of storage that works like a stack, referenced by a dedicated machine register. Allocating the automatic data consists of adding (or subtracting) a value to the stack register; popping the stack involves only subtracting (or adding) a value back. When the function exits, the stack is popped, releasing storage for all its automatic data.
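    The "each invocation gets its own i" point can be sketched with a tiny recursive function (my own illustrative example, not from the original answer): every active call to depth_sum holds its own automatic copy of i on the program stack at the same time.

    ```cpp
    // Each recursive invocation gets its own automatic variable i;
    // the inner call's i does not disturb the outer call's i.
    int depth_sum(int depth) {
        int i = depth;                        // automatic: allocated when this invocation starts
        if (depth == 0) return i;
        return i + depth_sum(depth - 1);      // deeper invocations have independent copies of i
    }
    ```

    For example, depth_sum(3) adds the i values 3, 2, 1, and 0 from four simultaneously live invocations.
    
    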
    Dynamically allocated storage is acquired by an explicit use of a new-expression, or a call to an allocation function like malloc(). Example:
    int* ip = new int[100]; // allocate space for 100 integers
    double* id = (double*)malloc(100*sizeof(double)); // allocate space for 100 doubles
    Dynamic storage is not released until you release it explicitly via a delete-expression or a call to free(). Managing the "heap", the area from which dynamic storage is acquired, and to which it is released, can be quite time-consuming.
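    To make the pairing rules concrete, here is a minimal sketch (the function name is my own, for illustration): storage acquired with new[] must be released with delete[], and storage acquired with malloc() must be released with free(), explicitly, before the pointers go away.

    ```cpp
    #include <cstdlib>  // std::malloc, std::free

    // Sum the first n integers using heap storage, releasing it before returning.
    long sum_with_heap(int n) {
        int* a = new int[n];                   // acquired with new[] ...
        for (int i = 0; i < n; ++i) a[i] = i;
        long total = 0;
        for (int i = 0; i < n; ++i) total += a[i];
        delete[] a;                            // ... so it must be released with delete[]
        double* d = (double*)std::malloc(n * sizeof(double));  // acquired with malloc() ...
        d[0] = 0.0;
        std::free(d);                          // ... so it must be released with free()
        return total;
    }
    ```

    Forgetting either release leaks the storage for the rest of the program's life, which is exactly the bookkeeping cost that automatic allocation avoids.
    
    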
    Your example of a Stack class (not to be confused with the program stack that is part of the C or C++ implementation) uses a fixed-size (that is, fixed at the point of template instance creation) automatically-allocated array to act as a stack data type. It has the advantage of taking zero time to allocate and release the space for the array. It has the disadvantages of any fixed-size array: it can waste space, or result in a program failure when you try to put N+1 objects into it, and it cannot be re-sized once created.
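    As a sketch of the point being made, the article's fixed-capacity template can be completed and used like this. The push/pop bodies and the bool return values are my own illustrative additions (the original only declares push); the essential property is unchanged: no new or malloc() call ever happens, because the N-element array lives inside the Stack object itself.

    ```cpp
    #include <cstddef>  // std::size_t

    template<class T, std::size_t N>
    class Stack {
        T data[N];              // fixed capacity N, stored inside the object itself
        std::size_t count = 0;
    public:
        bool push(const T& t) {
            if (count == N) return false;   // a fixed-size array cannot grow past N
            data[count++] = t;
            return true;
        }
        bool pop(T& out) {
            if (count == 0) return false;   // nothing to pop
            out = data[--count];
            return true;
        }
        std::size_t size() const { return count; }
    };
    ```

    Declaring Stack<int, 100> myFixedStack; inside a function places the whole object, array included, in automatic (run-time stack) storage: zero-cost allocation and release, at the price of a hard capacity limit of 100.
    
    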
