How to monitor physical memory utilization
Can anyone let me know how to monitor the physical memory utilization using monitors/rules?
The existing monitor in the Windows Server management pack monitors virtual memory, whereas I want to monitor physical memory.
You have to create the monitor yourself.
This monitor is called a static threshold performance monitor. In the counter list, choose % Committed Bytes In Use.
A walkthrough can be found here:
http://technet.microsoft.com/en-us/library/bb309655.aspx
Juke Chou
TechNet Community Support
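As a side note, if you just want to sanity-check a box's physical memory figures outside Operations Manager, the JVM's platform MXBean can report them. This is a hedged illustration using the HotSpot-specific com.sun.management extension, not part of the management pack itself:

```java
import java.lang.management.ManagementFactory;

public class PhysicalMemoryCheck {
    public static void main(String[] args) {
        // com.sun.management.OperatingSystemMXBean is a HotSpot extension of
        // the standard java.lang.management interface; it exposes physical
        // memory sizes that the standard interface does not.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        long total = os.getTotalPhysicalMemorySize();
        long free = os.getFreePhysicalMemorySize();
        double usedPct = 100.0 * (total - free) / total;
        System.out.printf("Physical memory used: %.1f%% (%d of %d bytes)%n",
                usedPct, total - free, total);
    }
}
```

On recent JDKs these methods are deprecated in favor of getTotalMemorySize()/getFreeMemorySize(), but they still work.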
Similar Messages
-
How to get the Memory Utilization Data for Cloud Service
Hi,
We are planning to monitor the performance of Cloud Services hosted on Azure through Visual Studio Online [TFS]. However, I couldn't find any performance metric for memory utilization on an individual Cloud Service.
So please help: how can we monitor memory utilization on an individual Cloud Service hosted through VSO?
Thanks.
Regards,
Subhash Konduru
Please remember to mark the replies as answers if they help and unmark them if they provide no help.
If you are using VSO then you can take a look at Azure Application Insights, a service hosted on Azure which will help you to detect issues, solve problems, and continuously improve your web applications.
Read more about Application insights here -
http://azure.microsoft.com/en-us/documentation/articles/app-insights-get-started/
https://msdn.microsoft.com/en-us/library/dn793604.aspx
Bhushan | Blog |
LinkedIn | Twitter -
How to measure JSP Memory Utilization
I'm trying to build a tool that will tell me how much resources a JSP is consuming. I'm using 1.4.2_14, a static heap size (1GB) and -Xgc:singlepar. I've created a filter that does a Runtime.totalMemory() - Runtime.freeMemory() before and after a chain to the JSP. To test this I built a simple JSP that I call from a shell script with curl:
<%
    int alloc = 131065;
    if (null != request.getParameter("alloc"))
        alloc = Integer.parseInt(request.getParameter("alloc"));
    Object[] o = new Object[alloc];
    for (int i = 0; i < o.length; i++)
        o[i] = new Object();
    int count = o.length; // capture the length first: reading o.length after o = null would throw a NullPointerException
    if (null != request.getParameter("clean")) {
        for (int i = 0; i < o.length; i++)
            o[i] = null;
        o = null;
    }
    out.println("Done with " + count);
%>
When running this JSP repeatedly starting with an allocation of 131,064 objects I get a heap growth of 0 until I increment to 131,067. Then I seem to get good information, but every so often I'll see an 18MB bump in memory. The size I get for heap growth at 131,067 is 512,288 bytes.
Why can't I see any memory utilization below 512KB?
What is this 18MB bump in memory?
Is there a way for me to get a more accurate measurement?
Thanks,
Hari
It's possible that the totalMemory() and freeMemory() calls are not 100% exact all the time; I don't remember exactly how that information is gathered.
There is a way to get very exact memory consumption with JR. Mail me for details.
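For what it's worth, Runtime-based readings can be made somewhat more repeatable by requesting a collection before each sample. A minimal sketch (plain Java rather than JSP; System.gc() is only a hint, so the figures remain approximate):

```java
public class HeapDelta {

    // Used heap, sampled after asking the JVM to collect garbage a few times.
    static long usedAfterGc() {
        for (int i = 0; i < 3; i++) {
            System.gc();
        }
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedAfterGc();
        byte[] block = new byte[10 * 1024 * 1024]; // hold a 10 MB allocation
        long after = usedAfterGc();
        System.out.println("Approximate growth: " + (after - before) + " bytes");
        System.out.println("Still holding " + block.length + " bytes"); // keeps 'block' reachable during sampling
    }
}
```

This won't explain the 512KB granularity (that is down to the collector's internal allocation units), but it does smooth out the occasional large bumps caused by uncollected temporaries.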
-- Henrik -
How to monitor java memory usage in enterprise manager
I am running sqlplus to execute a sql package, which generates XML.
When processing 2000+ rows, it gives an out-of-memory error.
Where in enterprise manger can I see this memory usage?
Thanks.
Hello,
it depends a little on what you want to do. If you use the pure CCMS monitoring with the table ALTRAMONI you get average response time per instance, and you only get new measurements once the status changes from green to yellow or red.
In order to get continuous measurements you should look into Business Process Monitoring and the different documentation under https://service.sap.com/bpm --> Media Library --> Technical Information. E.g. the PDF Setup Guide for Application Monitoring describes this "newer" dialog performance monitor. You probably have to click on the calendar sheet in the Media Library to see older documents as well. As Business Process Monitoring integrates with BW (there is also a BI Setup Guide in the Media Library) you can get trend lines there. This BW integration also integrates back with SL Reporting.
Some guidance for SL Reporting is probably given under https://service.sap.com/rkt-solman but I am not 100% sure.
Best Regards
Volker -
How to monitor CPU load/utilization
Hi
We have an application deployed in a JBoss server on Solaris/Windows. We need to monitor the system load, and if the load exceeds a certain threshold value then the application has to send out a notification (warning). Is there any Java API available for the same?
We do not intend to use "ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage()", as this doesn't exactly show the CPU utilization.
We use jdk1.6.
Any help would be appreciated.
Load average is related to the run-queue length (prstat), whereas CPU utilization is the actual percentage of CPU used by all the processes (vmstat).
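For what it's worth, later JDKs (7 and up, so not the 1.6 mentioned above) expose CPU load without JNI through the com.sun.management extension. A hedged sketch of the threshold check described above (the 80% threshold is illustrative):

```java
import java.lang.management.ManagementFactory;

public class CpuLoadCheck {
    public static void main(String[] args) throws InterruptedException {
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        // getProcessCpuLoad() returns a value in [0.0, 1.0], or a negative
        // value when no figure is available yet; the first call establishes
        // a baseline, so sample twice with a short pause.
        os.getProcessCpuLoad();
        Thread.sleep(200);
        double load = os.getProcessCpuLoad();
        double threshold = 0.8; // illustrative 80% warning threshold
        if (load >= threshold) {
            System.out.println("WARNING: process CPU load " + load);
        } else {
            System.out.println("Process CPU load: " + load);
        }
    }
}
```

Unlike getSystemLoadAverage(), this reports actual CPU utilization rather than run-queue depth.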
In the wiki page it was mentioned that load average considers even processes in the waiting state.
http://en.wikipedia.org/wiki/Load_(computing)
Correct me if my understanding is wrong.
- Is there any third party tool which can be used for this purpose?
Edited by: anusha_g on Oct 14, 2009 12:15 AM -
Hi,
on 10g R2,
in the documentation (Database 2 Day + Performance Tuning Guide) it says:
To monitor memory utilization:
1. On the Performance Summary page, from the View list, select Memory Details.
On my DB Control I do not have the View list to select Memory Details.
Could you please confirm that this is the case for DB Control, and that "from the View list, select Memory Details" is available only on the Performance page in Enterprise Manager?
Is in AWR report any part indicating host memory utilization ?
Thank you.
Do you use Standard Edition? In that case AWR/ADDM is unavailable.
If you use Enterprise Edition then see these parameters:
timed_statistics TRUE
statistics_level TYPICAL (best)
also see the snapshot interval and retention period:
col snap_interval format a30
col retention format a30
select snap_interval, retention
from dba_hist_wr_control; -
After updating to 10.9.1, how do I determine memory remaining? It was easy previously in Activity Monitor.
I see what you mean. I've not been able to decipher that either. I use a Free Memory app that I can use to free up memory from time to time, and when Free Memory gives me 3.42 free, Activity Monitor gives me this:
However, since Mavericks dynamically handles the memory, the only thing I watch is page outs. If I begin getting page outs then I'm pushing my memory limits and using virtual memory.
OT -
How to monitor memory usage of "Memory-based Snapshot" executable (MRCNSP) in Linux?
We have noticed in the past that the MRCNSP/Memory-based Snapshot program executable consumes around 3.8 GB of memory on the Linux VM. I understand that value chain planning is a 32-bit executable, so 4 GB is the limit. I want to monitor the memory usage of the executable when the program runs. The program usually runs overnight. I wanted to check with you experts whether you have any MRP executable memory-usage monitoring script that I could use.
I found the metalink note OS Environment and Compile Settings for Value Chain and MRP/Supply Chain Planning Applications (Doc ID 1085614.1) which talks about "top -d60 >>$HOME/top.txt". Please share your ideas for monitoring this process.
We do not use Demand Planning, Demantra, or Advanced Supply Chain Planning, which are 64-bit applications. That is our future direction.
Environment:
EBS : R12.1.2 on Linux. The concurrent manager is on 64 bit linux VM, web services on 32 bit VMs.
DB: Oracle DB 11.2.0.3 on HP UX Itanium 11.31. Single database instance.
Thanks
Cherrish Vaidiyan
RAM on the controller is not the same as the C: drive. With respect to the controller, you can think of it in the same terms as your computer: RAM is volatile memory and your C: drive is non-volatile flash memory.
Depending on the frequency of the temperature excursions above and below your 70C threshold, the service life of the controller and the method you used to append to the file, there could be a number of issues that may creep up over time.
The first, and the one you brought up, is the size of the file over time. Left unchecked this file could grow continuously until the system literally runs out of flash memory space and chokes. Depending on how you are appending data to this file, you could also use more than a trivial slice of processor time to read and write this big file on disk. While I have not personally ever run one of the RT controllers out of "disk space", I can't imagine that any good could come of that.
One thought is to keep a rolling history of say the last 3 months. Each month, start a new file and append your data to it during the course of the month. Each time a new file is created, delete the data file from something like 3 months ago. This will ensure that you will always have the last 3 months of history on the system, however the monthly deletion of the oldest data file will limit you to say 3 files at whatever size they happen to be. Unless there are hundreds of thousands of transitions above and below your threshold this should keep you in good shape.
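That rolling-history idea can be sketched in Java (the monthly file-naming scheme and retention count are illustrative, and records are appended with file functions rather than read-modify-write, in line with the next paragraph's advice):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.YearMonth;
import java.util.List;

public class RollingLog {
    static final int KEEP_MONTHS = 3; // illustrative retention window

    // Append one record to the current month's file without reading
    // the whole file back into memory first, then drop the file from
    // KEEP_MONTHS ago if it exists.
    static void append(Path dir, YearMonth month, String record) throws IOException {
        Path file = dir.resolve("excursions-" + month + ".log");
        Files.write(file, (record + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        Path old = dir.resolve("excursions-" + month.minusMonths(KEEP_MONTHS) + ".log");
        Files.deleteIfExists(old);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("rolling"); // stand-in for the controller's data directory
        append(dir, YearMonth.of(2024, 1), "12:00,71.2");
        append(dir, YearMonth.of(2024, 2), "08:30,70.4");
        List<String> jan = Files.readAllLines(dir.resolve("excursions-2024-01.log"));
        System.out.println("January records: " + jan.size());
    }
}
```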
I also alluded to the method you use to write to this file. I would ensure that you are appending data using the actual file functions, and not first reading in the file, appending your data as a string, then writing the entire file contents back to disk. In addition to causing the highest load on the file system, this method also has the largest system RAM requirements. -
How to find CPU and Memory Utilization
Hi,
How can I find the CPU and memory utilization of my application, which is
running on a Solaris workstation? My application has a management console in which we need to update the CPU and memory figures periodically.
(Note: usage of JNI is restricted)
Thnx and Rgds,
Mohan
There is no CPU usage API in Java; you need some JNI code. For memory usage see the Runtime methods:
/**
 * Get information about memory usage: max, total, free and (available/used).
 * <ul>
 * <li>Max is the maximum amount of bytes the application can allocate (see also the java option -Xmx). This value is fixed.
 * If the application tries to allocate more memory than Max, an OutOfMemoryError is thrown.
 * <li>Total is the amount of bytes currently allocated from the JVM for the application.
 * This value just increases (and doesn't decrease) up to the value of Max, depending on the application's memory
 * requirements.
 * <li>Free is the amount of memory that once was occupied by the application but has been given back to the JVM. This value
 * varies between 0 and Total.
 * <li>The used amount of memory is the memory currently allocated by the application. This value is always less than or equal
 * to Total.
 * <li>The available amount of memory is the maximum number of bytes that can still be allocated by the application (Max - used).
 * </ul>
 * In brief: used <= total <= max
 * <p>
 * The JVM can allocate up to 64MB (by default) of system memory for the application. If more is required, the JVM has to be started with the -Xmx option
 * (e.g. "java -Xmx128m" for 128 MB). The amount of currently allocated system memory is Total. As the JVM can't give unallocated memory back
 * to the operating system, Total can only increase (up to Max).
 * @return Memory info.
 */
static public String getMemoryInfo() {
    // toMemoryFormat(), getUsedMemory() and getAvailableMemory() are
    // helper methods of the same class (not shown in this post).
    StringBuilder sb = new StringBuilder();
    sb.append("Used: ");
    sb.append(toMemoryFormat(getUsedMemory()));
    sb.append(", Available: ");
    sb.append(toMemoryFormat(getAvailableMemory()));
    sb.append(" (Max: ");
    sb.append(toMemoryFormat(Runtime.getRuntime().maxMemory()));
    sb.append(", Total: ");
    sb.append(toMemoryFormat(Runtime.getRuntime().totalMemory()));
    sb.append(", Free: ");
    sb.append(toMemoryFormat(Runtime.getRuntime().freeMemory()));
    sb.append(")");
    return sb.toString();
} //getMemoryInfo() -
How to display CPU and memory utilization from ST06 in a report
Hi,
I want to display CPU utilization, memory utilization, and file system details from transaction ST06 in a report.
Is there any function module or any other method to do that?
Please advise.
Thanks,
Sandeep.
Hi Ranganath,
Thanks for your time.
And thank you very much for the reply.
Both the function modules are helpful.
But can you also help me get the file system data from ST06?
Thankyou,
Sandeep. -
How to monitor memory on Cisco ACE Appliance 4710?
I'm trying to monitor the memory usage in Cisco ACE Appliance 4710 load balancers with version A3(2.2), but the OIDs cpmCPUMemoryUsed (.1.3.6.1.4.1.9.9.109.1.1.1.1.12) and cpmCPUMemoryFree (.1.3.6.1.4.1.9.9.109.1.1.1.1.13) do not work.
What is the right OID to monitor memory usage in Cisco ACE 4710 Appliance load balancers?
Hi,
You need to use CISCO-ENHANCED-SLB-MIB.
cpmProcExtMemAllocatedRev .1.3.6.1.4.1.9.9.109.1.2.3.1.1 (this gives the memory allocated to each process)
You can also read up on the mib
Hope this helps
Venky -
How to find unused memory amount, OS X 10.6.8?
How do I find the remaining memory capacity on an iMac running 10.6.8?
Hi cpq24tr0,
You can find your memory utilization under "Activity Monitor", there is a memory tab that will detail its usage.
Good luck,
Dr. C. -
Agent resident memory utilization ALERT question
Hi all,
Every night during the incremental DB backups (and during the weekly full DB backup) I get an alert for the "Agent resident memory utilization" level. I did find this similar post but it didn't help me out much (Agent memory utilization under Linux).
I want to know if I should be concerned with this alert and how I can tune to get rid of it. Does anyone know how to handle this? I found the following in the Oracle documentation about this alert:
1 Agent
The oracle_emd target is a representation of the Oracle Management Agent. The Oracle Management Agent is the Management Agent used by Oracle Enterprise Manager. This target type exposes useful information required to monitor the performance of the Management Agent.
Most of the help topics in this helpset use the term Management Agent to refer to the Oracle Management Agent.
1.1 Agent Process Statistics
The EMD Process Statistics provides information about the performance and resource consumption of the Management Agent process. This metric is collected by default on an interval of 1038 seconds, a value that can be changed in the default collection for the oracle_emd target.
1.1.1 Agent Resident Memory Utilization (KB)
The amount of resident memory used by the agent and all of its child processes in KB.
I also get a nightly alert for Commit waits but I don't think they are related. Can someone please shed some light on this alert for me? Here is the full alert that I'm getting:
Name=localhost.localdomain:3938
Type=Agent
Host=localhost.localdomain
Metric=Resident Memory Utilization (KB)
Timestamp=Sep 13, 2006 2:08:14 AM EDT
Severity=Warning
Message=Agent resident memory utilization in KB is 223304
Rule Name=Agents Unreachable
Rule Owner=SYSMAN
If your Agent is really reaching the Warning or Critical threshold set for the "Agent Resident Memory Utilization" metric, you can increase the numbers. By default, "Resident Memory Utilization (%)" is set to 20% (Warning) and 30% (Critical), while "Resident Memory Utilization (KB)" is 128000 (Warning) and 256000 (Critical).
From Targets > All Targets > Select the Agent > Click Metric and Policy Settings at the bottom of the screen > Edit the Resident Memory Utilization as required.
Or, instead of changing the threshold, on the same Edit screen you can simply change the "Number of Occurrences" before an alert is triggered.
Meanwhile it is important to check whether the use of such amount is normal. -
Follow up on an old thread about memory utilization
This thread was active a few months ago; unfortunately it has taken me until now to have enough spare time to craft a response.
From: SMTP%"[email protected]" 3-SEP-1996 16:52:00.72
To: [email protected]
CC:
Subj: Re: memory utilization
As a general rule, I would agree that memory utilization problems tend to be
developer-induced. I believe that is generally true for most development
environments. However, this developer was having a little trouble finding
out how NOT to induce them. After scouring the documentation for any
references to object destructors, or clearing memory, or garbage collection,
or freeing objects, or anything else we could think of, all we found was how
to clear the rows from an Array object. We did find some reference to
setting the object to NIL, but no indication that this was necessary for the
memory to be freed.
I believe the documentation, and probably some Tech-Notes, address the issue of freeing memory.
Automatic memory management frees a memory object when no references to the memory object exist. Since references are the reason that a memory object lives, removing the references is the only way that memory objects can be freed. This is why the manuals and Tech-Notes talk about setting references to NIL (i.e. freeing memory in an automatic system is done by NILing references and not by calling freeing routines). This is not an absolute requirement (as you have probably noticed, most things are freed even without setting references to NIL) but it accelerates the freeing of 'dead' objects and reduces memory utilization, because the system tends to carry around fewer 'dead' objects.
It is my understanding that in this environment, the development tool
(Forte') claims to handle memory utilization and garbage collection for you.
If that is the case, then it is my opinion that it should be nearly
impossible for the developer to create memory-leakage problems without going
outside the tool and allocating the memory directly. If that is not the
case, then we should have destructor methods available to us so that we can
handle them correctly. I know when I am finished with an object, and I
would have no problem calling a "destroy" or "cleanup" method. In fact, I
would prefer that to just wondering if Forte' will take care of it for me.
It is actually quite easy to create memory leaks. Here are some examples:
Have a heap attribute in a service object. Keep inserting things into the heap and never take them out (i.e. forget to take them out). Since service objects are always live, everything in the heap is also live.
Have an exception handler that catches exceptions and doesn't do anything with the error manager stack (i.e. it doesn't call task.ErrMgr.Clear). If the handler is activated repeatedly in the same task, the stack of exceptions will grow until you run out of memory or the task terminates (task termination empties the error manager stack).
It seems to me that this is a weakness in the tool that should be addressed.
Does anyone else have any opinions on this subject?
Actually, the implementation of the advanced features supported by the Forte product results in some complications in areas that can be hard to explain. Memory management happens to be one of the areas most affected. A precise explanation of a non-deterministic process is not possible, but the following attempts to explain the source of the non-determinism.
o The ability to call from compiled C++ to interpreted TOOL and back to compiled C++.
This single ability causes most of the strange effects mentioned in this thread. For C++ code the location of all variables local to a method is not known (i.e. C++ compilers can't tell you at run-time what is a variable and what isn't). We use the pessimistic assumption that anything that looks like a reference to a memory object is a reference to a memory object. For interpreted TOOL code the interpreter has exact knowledge of what is a reference and what isn't. But the TOOL interpreter is itself a C++ method. This means that any memory objects referenced by the interpreter during the execution of TOOL code could be stored in local variables in the interpreter. The TOOL interpreter runs until the TOOL code returns or the TOOL code calls into C++. This means that many levels of nested TOOL code can be the source of values assigned to local variables in the TOOL interpreter.
This is the complicated reason that answers the question: Why doesn't a variable that is created and only used in a TOOL method that has returned get freed? It is likely that the variable is referenced by local variables in the TOOL interpreter method. This is also why setting the variable to NIL before returning doesn't seem to help. If the variable in question is an Array then invoking Clear() on the Array seems to help, because even though the Array is still live, the objects referenced by the Array have fewer references. The other common occurrence of this effect is in a TextData that contains a large string. In this case, invoking SetAllocatedSize(0) can be used to NIL the reference to the memory object that actually holds the sequence of characters. Compositions of Arrays and TextDatas (i.e. an Array of TextDatas that all have large TextDatas) can lead to even more problems.
When the TOOL code is turned into a compiled partition this effect is not noticed, because the TOOL interpreter doesn't come into play and things execute the way most people expect. This is one area that we try to improve upon, but it is complicated by the 15 different platforms, and thus C++ compilers, that we support. Changes that work on some machines behave differently on other machines. At this point in time, it occasionally still requires that a TOOL programmer actively address problems. Obviously we try to reduce this need over time.
o Automatic memory management for C++ with support for multi-processor threads.
Supporting automatic memory management for C++ is not a very common feature. It requires a coding standard that defines what is acceptable and what isn't. Additionally, supporting multi-processor threads adds its own set of complications. Luckily TOOL users are insulated from this because the TOOL-to-C++ code generator knows the coding standard. In the end you are impacted by the C++ compiler and possibly the differences that occur between different compilers and/or different processors (i.e. Intel x86 versus Alpha). We have seen applications that had memory utilization differences of up to 2:1.
There are two primary sources of differences.
The first source is how compilers deal with dead assignments. The typical TOOL fragment that is being memory-manager friendly might perform the following:
temp : SomeObject = new;
... // Use someObject
temp = NIL;
return;
When this is translated to C++ it looks very similar, in that temp will be assigned the value NULL. Most compilers are smart enough to notice that 'temp' is never used again because the method is going to return immediately, so they skip setting 'temp' to NULL. In this case it should be harmless that the statement was ignored (see the next example for a different variation). In more complicated examples that involve loops (especially long-lived event loops) a missed NIL assignment can lead to leaking the memory object whose reference didn't get set to NIL (incidentally, this is the type of problem that causes the TOOL interpreter to leak references).
The second source is a complicated interaction caused by the history of method invocations. Consider the following:
Method A() invokes method B(), which invokes method C(). Method C() allocates a temporary TextData, invokes SetAllocatedSize(1000000), does some more work and then returns. Method B() returns. Method A() now invokes method D(). Method D() allocates something that causes the memory manager to look for memory objects to free.
Now, even though we have returned out of method C(), we have started invoking methods again. This causes us to re-use portions of the C++ stack used to maintain the history of method invocation and space for local variables. There is some probability that the reference to the 'temporary' TextData will now be visible to the memory manager, because it was not overwritten by the invocation of D() or anything invoked by method D().
This example answers questions of the form: Why does setting a local variable to NIL and returning and then invoking task.Part.Os.RecoverMemory not cause the object referenced by the local variable to be freed?
In most cases these effects cause memory utilization to be slightly higher than expected (in well-behaved cases it's less than 5%). This is a small price to pay for the advantages of automatic memory management.
An object-oriented programming style supported by automatic memory management makes it easy to extend existing objects or sets of objects by composition. For example:
Method A() calls method B() to get the next record from the database. Method B() is used because we always get records, objects, of a certain type from method B(), so that we can reuse code. Method A() enters each row into a hash table so that it can implement a cache of the last N records seen. Method A() returns the record to its caller.
With manual memory management there would have to be some interface that allows Method A() and/or the caller of A() to free the record. This requires that the programmer have a lot more knowledge about the various projects and classes that make up the application. If freeing doesn't happen you have a memory leak; if you free something while it's still being used, the results are unpredictable and most often fatal.
With automatic memory management, method A() can 'free' its reference by removing the reference from the hash table. The caller can 'free' its reference by either setting the reference to NIL or getting another record and referring to the new record instead of the old record.
Unfortunately, this convenience and power doesn't come for free. Consider the following, which comes from the Forte' run-time system:
A Window-class object is a very complex beast. It is composed of two primary parts: the UserWindow object, which contains the variables declared by the user, and the Window object, which contains the object representation of the window created in the window workshop. The UserWindow and the Window reference each other. The Window references the Menu and each Widget placed on the Window directly. A compound Window object, like a Panel, can also have objects placed in itself. These are typically called the children. Each of the children also has to know the identity of its Mom, so they refer to their parent object. It should be reasonably obvious that starting from any object that makes up the window, any other object can be found. This means that if the memory manager finds a reference to any object in the Window, it can also find all other objects in the window. Now, if a reference to any object in the Window can be found on the program stack, all objects in the window can also be found. Since there are so many objects, and the work involved in displaying a window can be very complicated (i.e. the automatic geometry management that lays out the window when it is first opened or resized), there are potentially many different references that would cause the same problem. This leads to a higher than normal probability that a reference exists that can cause the whole set of Window objects to not be freed.
We solved this problem in the following fashion:
Added a new method called RecycleMemory() on UserWindow. Documented that when a window is not going to be used again, it is preferable that RecycleMemory() is invoked instead of Close(). The RecycleMemory() method basically sets all references from parent to child to NIL and sets all references from child to parent to NIL. Thus all objects are isolated from the other objects that make up the window. Changed a few methods on UserWindow, like Open(), to check if the caller is trying to open a recycled window and throw an exception.
This was feasible because the code to traverse the parent/child relationship already existed and was being used at close time to perform other bookkeeping operations on each of the Widgets.
To summarize:
Automatic memory management is less error prone and more productive, but doesn't come totally for free. There are things that the programmer can do to assist the memory manager:
o Set object references to NIL when known to be correct (this is the way memory is deallocated in an automatic system).
o Use methods like Clear() on Array and SetAllocatedSize() on TextData that allow these objects to set their internal references to NIL when known to be correct.
o Use the RecycleMemory() method on windows, especially very complicated windows.
o Build similar types of methods into your own objects when needed.
o If you build highly connected structures that are very large in the number of objects involved, think about how they might be broken apart gracefully (it defeats some of the purpose of automatic management to go to great lengths to deal with the problem).
o Since program stacks are the source of the 'noise' references, try to do things with fewer tasks (this was one of the reasons that we implemented event handlers, so that a single task can control many different windows).
Even after doing all this it's easy to still have a problem. Internally we have access to special tools that can help point at the problem so that it can be solved. We are attempting to give users UNSUPPORTED access to these tools for Release 3. This should allow users to more easily diagnose problems. It also tends to enlighten one about how things are structured and/or point out inconsistencies that are the source of known/unknown bugs.
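Forte's TOOL is long gone, but the underlying point, that removing the last live reference is what makes an object collectible, carries over to garbage-collected languages generally. A Java sketch (class and variable names are illustrative; System.gc() is only a hint, hence the polling loop):

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

public class CacheRelease {

    static boolean collectedAfterClear() throws InterruptedException {
        Map<String, byte[]> cache = new HashMap<>();
        cache.put("record", new byte[1024 * 1024]);

        // A WeakReference observes the object without keeping it alive.
        WeakReference<byte[]> probe = new WeakReference<>(cache.get("record"));

        cache.clear(); // the analogue of NILing references / Array.Clear()

        // Poll, since a single System.gc() call is not guaranteed to collect.
        for (int i = 0; i < 50 && probe.get() != null; i++) {
            System.gc();
            Thread.sleep(10);
        }
        return probe.get() == null;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("collected after clear: " + collectedAfterClear());
    }
}
```

Note that unlike Forte's conservative C++ scanning, Java's collector handles cycles, so explicit parent/child unlinking in the RecycleMemory() style is usually unnecessary; clearing long-lived containers such as caches is the part that still matters.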
Derek
Derek Frankforth [email protected]
Forte Software Inc. [email protected]
1800 Harrison St. +510.869.3407
Oakland CA, 94612
I believe he means to reformat it like a floppy disk.
Go into My Computer and locate the drive letter associated with your iPod (it normally says iPod in it, and shows under removable storage).
Right-click on it and choose Format - make sure the "quick format" option is not checked. Then let it format.
If that doesn't work, there are steps somewhere in the 5th gen forum (don't have the link off hand) to use usbstor.sys to update the USB drivers for the Nano/5th gen. -
Thanks to all who responded to my question about memory utilization. There
were some good suggestions that I will follow up on. I am very grateful for
the help.
As a general rule, I would agree that memory utilization problems tend to be
developer-induced. I believe that is generally true for most development
environments. However, this developer was having a little trouble finding
out how NOT to induce them. After scouring the documentation for any
references to object destructors, or clearing memory, or garbage collection,
or freeing objects, or anything else we could think of, all we found was how
to clear the rows from an Array object. We did find some reference to
setting the object to NIL, but no indication that this was necessary for the
memory to be freed.
It is my understanding that in this environment, the development tool
(Forte') claims to handle memory utilization and garbage collection for you.
If that is the case, then it is my opinion that it should be nearly
impossible for the developer to create memory-leakage problems without going
outside the tool and allocating the memory directly. If that is not the
case, then we should have destructor methods available to us so that we can
handle them correctly. I know when I am finished with an object, and I
would have no problem calling a "destroy" or "cleanup" method. In fact, I
would prefer that to just wondering if Forte' will take care of it for me.
It seems to me that this is a weakness in the tool that should be addressed.
Does anyone else have any opinions on this subject?
Index rebuild = drop and recreate; the completely recreated index will be in memory until the full operation completes.
The lazy writer process periodically checks the available free space in the buffer cache between two checkpoints. If a dirty data page (a page read and/or modified) in the buffer hasn't been used for a while, the lazy writer flushes it to disk and then marks it as free in the buffer cache.
If SQL Server needs more memory and the buffer cache size is below the value set as the Maximum server memory parameter for the SQL Server instance, the lazy writer will take more memory
If SQL Server is under memory pressure, the lazy writer will be busy trying to free enough internal memory pages and will be flushing the pages extensively. The intensive lazy writer activity affects other resources by causing additional physical disk I/O activity and using more CPU resources.
To provide enough free space in the buffer, pages are moved from the buffer to disk. These pages are usually moved at a check point, which can be:
automatic (occurs automatically to meet the recovery interval request)
indirect (occurs automatically to meet the database target recovery time)
manual (occurs when the CHECKPOINT command is executed)
internal (occurs along with some server-level operations, such as backup creation)
At a checkpoint, all dirty pages are flushed to disk and the page in the buffer cache is marked for overwriting
“For performance reasons, the Database Engine performs modifications to database pages in memory—in the buffer cache—and does not write these pages to disk after every change. Rather, the Database Engine periodically issues a checkpoint on each database. A checkpoint writes the current in-memory modified pages (known as dirty pages) and transaction log information from memory to disk and, also, records information about the transaction log.”
Raju Rasagounder Sr MSSQL DBA