Is Snow Leopard giving maximum memory priority to the disk cache?

Pretty much every time I work with large files (video files, archiving or unarchiving big archives, VMware images) the system quickly runs out of memory and starts swapping profusely. The swap file grows to 2-4 gigabytes pretty quickly, the system becomes unresponsive, etc.
I have the latest MacBook Pro 2009 with 4 GB of RAM, and right after a reboot it shows about 2.8 GB of free memory and 0 in swap.
Then I start VMware, for example, which only uses about 500-600 megabytes of RAM. After some time working with it, the amount of free memory in Mac OS X steadily decreases, then the system starts swapping, and the swap keeps growing. The system becomes slower and slower, and yet Activity Monitor still shows VMware using about 500-600 megabytes of memory. At the same time it shows about 2 gigabytes of active memory, about 1 GB of inactive memory, and about 800 megabytes of "wired" memory.
Where the **** did all the memory go? Did the system use it all for disk cache, trying to keep all the disk images VMware works with in RAM at the expense of physical memory available to other processes? If so, that's VERY silly: while it may give an appearance of "snappiness" with a small number of simple applications that don't do much, it becomes a MAJOR problem for applications that work with large files (iMovie, VMware, any other media-processing application). Mac OS X starts swapping and beachballing and becomes unresponsive after a certain amount of work, the swap file keeps growing, and yet the memory used by those applications, as shown by Activity Monitor, stays around 100-500 megabytes.
I was working with JES Deinterlacer the other day, trying to convert a 1080i/30fps video into 720p/60fps. The file was about 23 gigabytes in size. Besides that I only had Mail and Safari open. Before I started the deinterlacing, there was about 1.5 GB of free memory. After it started, free memory went down steadily until the system started swapping and became unusable after a while. The deinterlacing also slowed down significantly (my guess is that, again, the system gave memory needed by JES Deinterlacer to the disk cache for the file being processed). There's no point in caching that file at all: there is zero chance the application will ever jump back to the parts of the file it has already processed, or read back any part of the new file it's creating!
What's other people's experience working with large files in Snow Leopard? Any Final Cut users? Or iMovie? Does your system exhibit the same behavior? Have you tried monitoring memory usage while you work with any of the applications that work with large files?
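(If you want to watch this happen numerically rather than through Activity Monitor, the stock vm_stat tool reports the same free/active/inactive/wired counters. Below is a minimal, hypothetical sketch that polls it once a second; the exact label text vm_stat prints can differ between OS X releases, so treat the parsing as an assumption.)
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Polls the OS X vm_stat tool once per second and echoes the page
// counters, so you can watch free pages fall and inactive (cached)
// pages climb while a large file is being processed.
// Assumes vm_stat prints lines like "Pages free: 12345."
public class VmStatWatcher {
    public static void main(String[] args) throws Exception {
        while (true) {
            Process p = new ProcessBuilder("vm_stat").start();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    // Keep only the counters relevant to this thread.
                    if (line.startsWith("Pages free")
                            || line.startsWith("Pages active")
                            || line.startsWith("Pages inactive")
                            || line.startsWith("Pages wired")) {
                        System.out.println(line);
                    }
                }
            }
            p.waitFor();
            System.out.println("----");
            Thread.sleep(1000);
        }
    }
}
Each counter is in 4096-byte pages, so multiply by 4096 for bytes.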

Having exactly the same issue. However, I cannot even get VMware to start. It gives me a "could not open paging file" message and never opens. I have also had strange random crashes recently where the system seems to run out of memory, the force quit dialog opens, and usually Safari, Entourage, or Aperture is frozen or paused. Really getting to be a big issue.
I contacted VMware and they looked at all the logs, had me try one change to an internal VM file, but without success. They now are suggesting that based on the logs, it is an OSX paging issue and that I need to do an archive and install!!! No way do I want to do that after just migrating to this new machine and getting everything set up so well!
Need some alternative ideas. Actually hoping 10.6.2 comes out and solves it!

Similar Messages

  • Please make the memory usage build-up a top priority to problem-solve.

    Hi, Firefox is my favorite browser on OS X, but the memory usage is really getting annoying. It's been 2 years and I have to restart Firefox every couple of hours to keep the memory usage in check; if I don't, it just keeps climbing.
    I think this is a serious issue with your browser. None of the other browsers (Safari, Chrome) I use have this problem; however, they're not as user-friendly in my opinion.
    Like I said, it has been 2 years and the forums are filled with this topic without any real solutions. Are there actually people working on this problem? It's a pretty big problem, and honestly I don't understand why there's no correspondence from Firefox acknowledging the problem and saying they're working on it or something.
    Before those 2 years, never a hitch. The problem must have been introduced around FF 21 or 22, I'm not sure.
    I'm experiencing exactly the same as "Cyberpawz":
    https://support.mozilla.org/en-US/questions/958108?page=6
    I'm an avid Firefox user and just want the voices of the users experiencing this problem to be heard. And of course, the problem to be resolved.
    Thanks for listening!

    The memory issues are real problems; just look at the viewing stats on the thread you linked to:
    ''104 replies, 94 have this problem, 7788 views''
    The easy problems will have been identified and fixed. The difficult or least common ones need users able to help troubleshoot, and most just want to complain rather than file bugs. (Please note such bugs will likely only be followed up if it is a Firefox bug or a problem with a single popular addon.)
    If you follow the advice from ''Cor-el'' and try with a new profile with all plugins disabled, you should find the problem goes away.

  • How to specify maximum memory usage for Java VM in Tomcat?

    Does anyone know how to set memory options for the Java VM, such as the "-Xmx256m" parameter, in Tomcat?
    I'm using Tomcat 3.x with the Apache web server on the Sun Solaris platform. I already tried adding the following line to tomcat.properties:
    wrapper.bin.parameters=-Xmx512m
    However, this doesn't seem to work. So what happens if my servlet consumes a large amount of memory that exceeds the default 64 MB memory boundary of the Java VM?
    Any idea will be appreciated.
    Haohua

    With some help we found the fix. You have to set the -Xms and -Xmx at installation time when you install Tomcat 4.x as a service. Services do not read system variables. Go to the command prompt in windows, and in the directory where tomcat.exe resides, type "tomcat.exe /?". You will see jvm_options as part of the installation. Put the -Xms and -Xmx variables in the proper place during the install and it will work.
    If you can't uninstall and reinstall, you can apply this registry hack that dfortae sent to me on another thread.
    =-=-=-=-=-=
    You can change the parameters in the Windows registry. If your service name is "Apache Tomcat", the location is:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Apache Tomcat\Parameters
    Change the "JVM Option Count" value to the new number of parameters it will now have. In my case, I added two parameters (-Xms100m and -Xmx256m); the count was 3 before, so I bumped it to 5.
    Then I created two more String values. I called the first one I added 'JVM Option Number 4' and the second 'JVM Option Number 5'. Then I set the value inside each. The first one I set to '-Xms100m' and the second I set to '-Xmx256m'. Then I restarted Tomcat and observed when I did big processing the memory limit was now 256 MB, so it worked. Hope this helps!
    =-=-=-=-=
    I tried this and it worked. I did not want to have to go through the whole reinstallation process, so this was best for me.
    Thanks to all who helped on this.
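    One quick way to confirm the options actually reached the JVM is to print the Runtime memory figures from inside Tomcat (a JSP scriptlet with the same three calls works too); maxMemory() reflects the -Xmx the process was started with. A minimal sketch (the class name is made up for illustration):
    // Prints the heap limits of the JVM it runs in. With -Xmx256m,
    // maxMemory() should report roughly 256 MB (slightly less on
    // some VMs, since one survivor space is excluded).
    public class HeapCheck {
        public static void main(String[] args) {
            long mb = 1024 * 1024;
            Runtime rt = Runtime.getRuntime();
            System.out.println("max heap:   " + rt.maxMemory() / mb + " MB");
            System.out.println("total heap: " + rt.totalMemory() / mb + " MB");
            System.out.println("free heap:  " + rt.freeMemory() / mb + " MB");
        }
    }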

  • Having a problem with Excessive "modified" memory usage in Win7 x64, upwards of 3.6GB, any suggestions?

    I have 6GB of ram, a fresh install of Windows 7 x64, and the screenshot shows what happens after leaving my PC on for a couple days. (3782+MB being used by modified memory ATM).
    http://wow.deconstruct.me/images/ExcessiveMemory.jpg
    Any ideas on this?
    Edit:
    Added this after first round of suggestions
    http://wow.deconstruct.me/images/NotSoExcessiveMemory.jpg
    This is uptime of around 2 hours.
    The first image is of uptime of around 3-5 days.

    Matthew,
    The only reason why these pages are kept on the modified list indefinitely is because the system doesn't have any available pagefile space left. If you increase the size of the pagefile the system will write most of these pages to disk and then move them from the modified list to the standby list. Standby pages are considered part of "available memory", because they can be reused for some other purpose if necessary.
    Whether this would "fix" the problem or not depends on what the actual problem is. If it's an unbound memory leak then increasing the size of the pagefile will simply allow the system to run longer before it eventually hits the maximum pagefile size limit, or runs out of disk space. On the other hand, if it's a case of some application allocating a lot of memory and not using it for a long time, then increasing the pagefile might be a perfectly valid solution.
    Allowing the system to manage the size of the pagefile actually works well in most cases. Pagefile fragmentation (at the filesystem level) can only occur when the initially chosen size is not large enough and the system has to extend it at run time. For win7 we have telemetry data that shows that even for systems with 1 GB of RAM, less than 0.1% of all boot sessions end up having to extend the pagefile, and this number is even lower for larger amounts of RAM. If you think you are in that 0.1% and your pagefile might be getting fragmented, you can manually increase its minimum size such that the total system commit charge stays below 80% even if you run all your apps at once (80% is the threshold at which the pagefile is automatically extended). This will make sure the pagefile is created once and then stays at the same size forever, so it can't fragment. The maximum size can either be set to the same value as the minimum, or you can make it larger so that the system is more resilient to memory leaks or unexpectedly high loads.
    By the way, Windows doesn't use pagefiles as "extra memory", it uses them as a backing store for private pages, just like regular files are used as a backing store for EXEs/DLLs and memory mapped files. So if the system really has more than enough RAM (like in your second screenshot, where you have 3.6 GB of free pages) you shouldn't see any reads from the pagefile. You can verify this by going to the Disk tab in the resource monitor and looking for any disk IO from pagefile.sys. On smaller systems that don't have an excess of free pages you may see periodic reads from the pagefile, and this is expected because the total amount of data referenced by the OS/drivers/processes is larger than the total RAM. Forcefully keeping all pagefile-backed pages in memory (which is what disabling the pagefile does) would simply mean some other pages (memory mapped files, DLL code or data etc) would have to be paged out.
    Regarding further troubleshooting steps: If the system runs fine with a larger pagefile (commit charge stabilizes well below 80%, and you no longer see gigabytes of modified pages accumulating in memory) then you don't really need to do anything. If the problem persists, you can check for any processes with an abnormally high commit charge, and also check kernel memory usage in task manager. If it's a kernel leak you can usually narrow it down to a particular driver using poolmon.exe or kernel debugger.
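    As a small aside on watching commit charge over time: if the suspect process happens to be a JVM, the JDK's management API exposes the process commit and system swap numbers directly (for non-Java processes, Resource Monitor's Commit column is the equivalent). A hedged sketch, relying on the HotSpot-specific com.sun.management cast:
    import com.sun.management.OperatingSystemMXBean;
    import java.lang.management.ManagementFactory;

    // Logs this process's committed virtual memory plus system-wide
    // swap once a minute -- a crude way to spot unbounded commit
    // growth of the kind described above.
    public class CommitWatcher {
        public static void main(String[] args) throws InterruptedException {
            OperatingSystemMXBean os = (OperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();
            long mb = 1024 * 1024;
            while (true) {
                System.out.printf("process commit: %d MB, swap free: %d / %d MB%n",
                        os.getCommittedVirtualMemorySize() / mb,
                        os.getFreeSwapSpaceSize() / mb,
                        os.getTotalSwapSpaceSize() / mb);
                Thread.sleep(60000);
            }
        }
    }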

  • SA520W - High memory usage, possible fix in 2.2.0 firmware?

    As suggested by Thomas Watts, I'm starting a new thread to discuss the new SA520W firmware (2.2.0) and a possible resolution to high memory usage I'm experiencing on my network.
    My current setup is: 16Mbit DSL > SA520W > SA300-10, all with stock settings (no fancy VLAN's etc.)
    I have 4 CentOS 5/6 servers and a Windows 7 Ultimate station connected to the switch. I use CIFS to connect from the Windows station to the Linux servers and send large files. I currently notice the following behavior:
    When the file transfer starts, the Intel 1 Gbit NIC is nearly saturated, hitting 115 MB/sec. After a few seconds, the data transfer comes to a halt and the transfer speed drops to around 50 MB/sec. If I check the memory usage before the file transfer, it is at approximately 50-60% (on a fresh router reboot). Every time I send large files to other machines, the router memory consumption increases and does not come back down after a reasonable delay. I end up with memory usage near 90%, and the only solution I have is to reboot the router to bring it back to 50%.
    Now, Thomas told me that this is simply a cosmetic issue, that the memory is not actually 90% used. Yet when the memory hits this threshold, I'm not able to send files at the normal LAN speeds I'm used to. Rebooting the router lets me send data at the expected LAN speeds only ONCE (and only for a few seconds).
    I would appreciate any input from Cisco engineers as well as other users who experience the same issue. I would also like to know whether any related work was done in the 2.2.0 firmware and when we can expect it to be released to users.
    Regards,
    Floren Munteanu

    Hi Tom,
    See below the answers.
    Are you currently running the 2.1.71 code?
    Yes
    Are you using IPS?
    No, the LAN is for internal use (no external users allowed)
    Are you using Protectlink services?
    No
    Hardware-wise, I did not change anything on the machines. All boxes have dual Intel EXPI9301CT NICs (LACP was planned), but I currently use single connections for sanity reasons (the disks won't allow greater speeds anyway). Previous to Cisco, I used a Netgear ProSafe router + switch which did not encounter the issues I mention. Honestly, at first I thought I was dealing with some stupid disk issue on Windows. So I ran a quick test and the stats are proper:
    > winsat disk -drive c
    > Disk  Sequential 64.0 Read                   109.62 MB/s        6.5
    > Disk  Random 16.0 Read                       2.47 MB/s          4.4
    > Responsiveness: Average IO Rate              2.12 ms/IO         6.9
    > Responsiveness: Grouped IOs                  8.34 units         7.4
    > Responsiveness: Long IOs                     5.59 units         7.7
    > Responsiveness: Overall                      46.63 units        7.1
    > Responsiveness: PenaltyFactor                0.0
    > Disk  Sequential 64.0 Write                  117.03 MB/s        6.7
    > Average Read Time with Sequential Writes     6.977 ms           5.3
    > Latency: 95th Percentile                     32.720 ms          3.0
    > Latency: Maximum                             118.231 ms         7.6
    > Average Read Time with Random Writes         13.346 ms          3.7
    > Total Run Time 00:01:39.50
    As I mentioned before, everything is pretty much stock on the router/switch settings. If you have any tips that would allow me to identify the cause, I would appreciate the input. What puzzles me is the speed drop and quick memory usage increase. It occurs 7-10 seconds after the transfer begins. It looks like the data transfer hangs for a very short period (less than half a second) and the transfer speed decreases from 110-115 MB/sec to 50-60 MB/sec. The transfer completes at this speed. No matter how many other files I try to transfer afterwards, the speed won't go higher than 60 MB/sec. If I reboot the router, I get the same cycle.

  • High memory usage OSX Lion on iMac

    Hi,
    Recently upgraded a 2006 iMac 6,1 to OS X 10.7.2 (4 GB). Noticed performance dropped significantly when 2 or more users are logged in, especially when switching users. Memory usage on Lion is far higher than on Snow Leopard! By startup, 2 GB of real memory is already allocated, and I'm often down to the last 500 MB. Performance appears to slow down due to swapping, as swap I/O increases.
    On Snow Leopard I had 6 users logged in and 4 GB was plenty. Lion appears to be an MS product!!!!
    Anyone else experienced high memory usage on Lion and any ideas how to reduce memory consumption?
    Unfortunately I'm at the maximum memory capacity for my iMac, so I need to find ways to reduce memory usage. There must be a kernel compile option that could reduce memory.....
    Also considering SSD drive to speed up swap i/o read/writes.
    Otherwise I will have to go back to Leopard :-(
    Thanks for any help in advance.
    -Dav
    PS> OS X Lion is a lot more stable than previous OS X releases with this iMac model, especially on iMacs suffering the notorious NVIDIA GPU heat problems...

    Your Mac can handle up to 3 GB of RAM, but slightly more will be available with 4 GB installed. For Lion to run smoothly, a true 4 GB of RAM is preferred, which may explain the sluggishness of your Mac.

  • Network stream fxp excess memory usage and poor performance

    I'm trying to stream data at a high rate (3 channels at 1 MB/s) from my 9030 to my Windows host. Because I don't need to use the data on the RT side, I chose to forward the FXP <+-,24,5> values to my host through a network stream.
    To avoid data loss I chose to use a wide buffer of 6,000,000 elements. With this buffer my memory usage grows from 441 MB to 672 MB and my RIO is unable to stream the data.
    With SGL or double, memory usage goes from 441 MB to 491 MB and the data can be streamed continuously.
    Has anyone encountered this problem?

    SQL Developer is Java-based and relies on the JVM's memory management.
    I'm not aware of any memory leaks as such, but memory tends not to be returned to the system.
    Queries which return large result sets tend to use a lot of memory (SQL Developer has to build a Java table containing all the results for display).
    You can restrict the maximum memory allocated by modifying settings in <sqldeveloper>\ide\bin\ide.conf
    The defaults are -
    AddVMOption -Xmx640M
    AddVMOption -Xms128M
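    If you want to see that behaviour for yourself, the standard JVM management API can log heap usage over time: "used" drops after each collection, while "committed" (what the OS sees) tends to stay high. A generic sketch under that assumption, not anything SQL Developer ships:
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    // Logs JVM heap usage against the -Xmx ceiling every 5 seconds.
    public class HeapUsageLogger {
        public static void main(String[] args) throws InterruptedException {
            long mb = 1024 * 1024;
            while (true) {
                MemoryUsage heap =
                        ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
                        heap.getUsed() / mb, heap.getCommitted() / mb,
                        heap.getMax() / mb);
                Thread.sleep(5000);
            }
        }
    }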

  • High Eden Java Memory Usage/Garbage Collection

    Hi,
    I am trying to make sure that my ColdFusion server is optimised to the max and to find out what the normal limits are.
    Basically, it looks like my servers can run slow at times, but it is possible that this is caused by a very old, bloated code base.
    JRun can sometimes have very high CPU usage, so I purchased Fusion Reactor to see what is going on under the hood.
    Here are my current Java settings (running v6u24):
    java.args=-server -Xmx4096m -Xms4096m -XX:MaxPermSize=256m -XX:PermSize=256m -Dsun.rmi.dgc.client.gcInterval=600000 -Dsun.rmi.dgc.server.gcInterval=600000 -Dsun.io.useCanonCaches=false -XX:+UseParallelGC -Xbatch ........
    With regards to memory, the only space that seems to run a lot of garbage collection is the Eden space. It climbs to nearly 1.2 GB in just under a minute, at which point GC kicks in and the usage drops to about 100 MB.
    Survivor memory grows to about 80-100MB over the space of 10 minutes but drops to 0 after the scheduled full GC runs. Old Gen memory fluctuates between 225MB and 350MB with small steps (~50MB) up or down when full GC runs every 10 minutes.
    I initially had the heap set to 2 GB in total, giving about 600 MB to the Eden space. When I looked at the graphs from Fusion Reactor I could see that there was (minor) garbage collection about 2-3 times a minute whenever memory usage maxed out the entire 600 MB, which seemed a high frequency to my untrained eye. I then upped the memory to 4 GB in total (~1.2 GB automatically given to the Eden space) to see the difference, and saw that GC happened 1-2 times per minute.
    Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often? i.e do these graphs look normal?
    Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Any other advice for performance improvements would be much appreciated.
    Note: these graphs are not from a period where JRun had high CPU.
    Here are the graphs:
    PS Eden Space Graph
    PS Survivor Space Graph
    PS Old Gen Graph
    PS Perm Gen Graph
    Heap Memory Graph
    Heap/Non Heap Memory Graph
    CPU Graph
    Request Average Execution Time Graph
    Request Activity Graph
    Code Cache Graph

    Hi,
    >Is it normal in Coldfusion that the Eden memory would grow so quickly and have garbage collection run so often?
    Yes, it is normal to garbage collect Eden often. That is a minor garbage collection.
    >Also should I somehow redistribute the memory available to give the Eden memory more since it seems to be where all the action is?
    Sometimes it is good to set Eden to a smaller size (Eden and its two survivor spaces combined make up the New or Young Generation part of the JVM heap). I know what you're thinking: why make it less when I want to make it bigger? Give less a try (sometimes less is more; bigger isn't always better) and monitor the situation. I like to use the -Xmn switch; some sources say to use other methods. Perhaps you could try java.args=-server -Xmx4096m -Xms4096m -Xmn172m etc. I'd better mention: make a backup copy of jvm.config before applying changes. Having said that, now you know how to set a bigger size if you want.
    I think the JVM is perhaps making some poor decisions in sizing the heap. With Eden growing to 1 GB and then being evacuated, not many objects are surviving and therefore not being promoted to the Old Generation. This ultimately means an object will need to be loaded into Eden again later rather than being referenced in the Old Generation part of the heap. That adds up to poor performance.
    >Any other advice for performance improvements would be much appreciated.
    You are using the parallel garbage collector. Perhaps you could enable it to run multi-threaded, reducing the duration of the garbage collections: jvm args ...-XX:+UseParallelGC -XX:ParallelGCThreads=N etc., where N = CPU cores (e.g. quad core = 4).
    HTH, Carl.
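    If you'd rather have raw numbers than graphs while experimenting with -Xmn, the JVM's GarbageCollectorMXBean exposes cumulative GC counts and times. A generic sketch (the bean names in the comment are what the parallel collector typically reports; they vary by collector):
    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Prints cumulative GC counts/times per collector. With
    // -XX:+UseParallelGC, "PS Scavenge" is the minor (Eden)
    // collector and "PS MarkSweep" the full collector.
    public class GcStats {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                for (GarbageCollectorMXBean gc :
                        ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.printf("%s: count=%d time=%dms%n",
                            gc.getName(), gc.getCollectionCount(),
                            gc.getCollectionTime());
                }
                System.out.println("----");
                Thread.sleep(10000);
            }
        }
    }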

  • About the limitation of memory usage of Illustrator.

    Please tell us about the behavior of Illustrator CS5/CS6 when its memory usage gets too high.
    As Illustrator's memory usage increases, I think the application behaves as follows:
    - A warning is displayed (e.g. "Illustrator cannot preview")
    - Illustrator stalls
    1. In such cases, is there a memory usage limit?
    2. If Illustrator's memory usage exceeds the maximum size, how does Illustrator behave?
       Is there a specification?
    3. Is there an automatic memory-release function in Illustrator? When does it run?


  • Nested tables and memory usage (ORA-04030 error)

    Dear All,
    I have a table with approximately 5,000,000 records
    and try to bulk collect part of it into a nested table in PL/SQL; the code is below
    Declare
         Type TcardRec Is Record(
              serno Pls_Integer,
              numberx Char(16),
              caccserno Pls_Integer
         );
         Type TcardList Is Table Of TcardRec;
         fcardInfo TcardList;
    Begin
    Select c.serno, substr(c.numberx,1,16), c.caccserno
    Bulk Collect Into fcardinfo
    From cardx c;
    End;
    After reading approx. 80% it fails with error
    ORA-04030: out of process memory when trying to allocate 16396 bytes (koh-kghu call ,pmucalm coll)
    I have 2 GB of memory; is it really not enough?
    How can I tune memory usage for the collection?
    How can I estimate the maximum size of collection that will fit into memory?
    Thank you in advance for any help
    Artem

    Declare it as a cursor.
    Open the cursor.
    Use fetch bulk collect with the limit option in the loop.
    In your case, you could do something like:
    Declare
    Cursor c1 is
    Select c.serno, substr(c.numberx,1,16), c.caccserno
    From cardx c;
    Type TcardList Is Table Of c1%rowtype;
    fcardInfo TcardList;
    Begin
    Open c1;
    Loop
    Fetch c1 Bulk Collect Into fcardInfo Limit 10000;
    -- Test the collection rather than c1%notfound: %notfound is already
    -- true on the final partial batch, which would skip processing it.
    Exit When fcardInfo.Count = 0;
    -- Do some processing here.
    End Loop;
    Close c1;
    End;
    I hope this helps.

  • SQL 2012 Memory usage analysis

    We have this clustered SQL Server, win 2008 R2 + SQL 2012 Standard SP2,
    Server RAM is 64GB, maximum memory is set at 50GB, and minimum memory is set at 10GB.
    When I check memory usage, it seems CLR takes more memory than the buffer pool. Is that normal? Or does it indicate a memory allocation issue? Thanks a lot!
    The "total server memory" and "target server memory" performance counters are both 50GB.
    Please see the screenshots below:

    Hi Vivian,
    I assume you are asking about SQL Server 2012 memory; the whole discussion would change if it were SQL Server 2008 R2 memory.
    You are not drawing the correct information from the first query.
    CLR has reserved 6 GB but has ONLY committed 12 MB. Besides, you are looking at virtual address space, which is measured in terabytes for a 64-bit SQL Server process, so of course CLR can reserve that much; there is actually no issue with it. Committed memory is what SQL Server is actually using;
    reserved, in simple language, is what SQL Server thinks it might need in the near future.
    As you said, target and total server memory are the same, which is good.
    For both 2012 and 2008 R2, the query below will give the total memory utilization by the SQL Server instance:
    select
    (physical_memory_in_use_kb/1024)Memory_usedby_Sqlserver_MB,
    (locked_page_allocations_kb/1024 )Locked_pages_used_Sqlserver_MB,
    (total_virtual_address_space_kb/1024 )Total_VAS_in_MB,
    process_physical_memory_low,
    process_virtual_memory_low
    from sys.dm_os_process_memory
    If you want to see a breakdown of memory utilized by the various clerks in 2008 R2:
    select
    type,
    (SUM(single_pages_kb)/1024) Single_page_allocator_memory,
    (SUM(multi_pages_kb)/1024) multi_page_allocator_memory,
    (sum(awe_allocated_kb)/1024) [AWE API memory]
    from sys.dm_os_memory_clerks
    group by type
    order by Single_page_allocator_memory desc
    For 2012
    select type,
    (SUM(pages_kb)/1024) as Memory_Utilized,
    (sum(awe_allocated_kb)/1024) as [Memory Allocated by AWE API]
    from
    sys.dm_os_memory_clerks
    group by type
    order by Memory_Utilized desc
    Starting from 2012, max server memory controls much more than the buffer pool.
    Max server memory controls SQL Server memory allocation, including the buffer pool, compile memory, all caches, QE memory grants, lock manager memory, and CLR memory (basically any “clerk” as found in dm_os_memory_clerks). Memory for thread stacks, heaps, linked server providers other than SQL Server, or any memory allocated by a “non SQL Server” DLL is not controlled by max server memory.

  • UCCX 7 Heap Memory Usage Exceeded Error

    UCCX 7.0.(1) SR5
    Getting the following error when updating or adding new script applications:
    "It is not recommended to update the application as Engine heap memory usage exceeded configured threshold. Click OK to continue and Cancel to exit."
    Apparently this is an alert that was built into SR4 and is configurable under the System Parameters.
    Does anyone have information on what processes use the heap memory in UCCX or how to monitor the usage?

    As Tom can attest to by now, this is something of an iceberg with big sharp edges below the surface.
    The Java heap is fixed at 256 MB on CCX. The Java heap is used by Tomcat as execution memory. In addition to this, applications, scripts, and other repository data are loaded into the heap at runtime. Depending on your environment, you may be approaching the limits of the heap, which cannot be changed. If the heap limit is reached, the heap will be dumped, impacting calls.
    What have you been doing as of late on your CCX server? How many applications and scripts do you have? Are any of these using XML files extensively?
    Note there is also a possible bug where the MIVR engine does not properly release all objects loaded into the heap at the end of a script execution, leading to a memory leak of sorts. The discussion [debate] over this behavior is continuing. As of this week, it may be represented under
    CSCte49231. If it is, this may qualify as the most poorly described defect ever.

  • Getting memory usage details in ABAP program

    Hello,
    Is there any method to get the memory used by a program, and to control the program, e.g. restricting its memory usage to some limit, or any other control measures that can be taken in the program itself if its memory usage exceeds a maximum limit? Or kindly let me know where I can find the details.

    Hi,
    > Is there any method to get the memory used by the program
    investigate the methods of class cl_abap_memory_utilities (e.g. GET_TOTAL_USED_SIZE)
    >and control the program like restricting the memory usage to some limit
    the report RSMEMORY can change the system-wide quotas
    >or any other control measures which can be taken in the program itself if the memory
    >usage by the program exceeds a maximum limit
    needs to be implemented manually if needed, program-specific...
    Kind regards,
    Hermann

  • What's the recommended setting for "Process memory usage" ("process virtual" in UI) for a 64-bit host on a 64-bit OS?

    Hi gurus
    In resource-based throttling, what's the recommended setting for "Process memory usage" ("process virtual" in the resource-based throttling tab of the UI) for a 64-bit host on a 64-bit Windows OS?
    According to MS (http://msdn.microsoft.com/en-us/library/ee308808(v=bts.10).aspx):
    "By default, the
    Process memory usage throttling threshold is set to 25. If this value is exceeded and the BizTalk process memory usage is more than 300 MB, a throttling condition may occur. On a 32-bit
    server, you can increase the Process memory usage value to 50. On a 64-bit server, you can increase this value to 100. This allows for more memory consumption by the BizTalk process before throttling
    occurs."
    Does this mean that 100 is the recommended setting for a 64-bit host on a 64-bit Windows?
    Thanks
    Michael Brandt Lassen

    Hi Michael,
    The recommended setting is the default setting, which is 25.
    If your situation is abnormal and you see the message delivery throttling state at "4" when the following performance counters are high, or if you expect any of your integration processes could have an impact on the following counters, then you can consider Microsoft's suggestion. Otherwise, don't change the default setting.
    High process memory
    Process memory usage (MB)
    Process memory usage threshold (MB)
    You can see these counters under “BizTalk:MessageAgent”
    You can gauge these performance counters and their maximum values if you have done any regression/performance testing on your test servers. If you have seen these counters reach high values and cause throttling, then you can update the Process memory usage.
    Or, if you unexpectedly process high-throughput messages in production, causing these counters to go high and trigger throttling, then you can update the Process memory usage.
    Those are the two cases in which I would change the default throttling setting: when I know my expected process usage (from performance testing), or when production processing has suddenly gone high due to an unexpected business spike (or any other reason) and caused throttling.
    Just changing the default setting without an actual reason can have an adverse effect: you end up allocating more processing capacity while actual message-processing usage stays low, i.e. you end up investing in underutilised resources.
    Regards,
    M.R.Ashwin Prabhu

  • How to calculate memory usage based on graph utilization

    Dear All ,
    We have a T2000 server with Solaris 10 and 15 zones inside, with the SMC server and module installed. The hardware configuration is 16 GB of memory, 4 x 72 GB hard disks, and 4 GB of swap. From the Container Manager menu, we select the host of the server and then click Utilization, but I see memory usage of 19759 MB. How is memory calculated from this graph, given that the maximum real RAM in our server is only 16 GB?
    Regards
    Hadi

    PL/SQL collections are stored in the PGA. So you can monitor the PGA utilization of the session(s) to see how much PGA they use.
    SELECT sid, name, value
      FROM v$statname name
           JOIN v$sesstat using (statistic#)
     WHERE name.name in ('session pga memory', 'session pga memory max')
    That will show you, for each session, the current PGA consumed by the session and the high-water mark of PGA consumption by that session. You can join to V$SESSION and add additional predicates to narrow things down to the particular sessions you are interested in.
    Justin
