In terms of memory utilization, which is better for iterating through a List?
A for loop or an Iterator?
1. For loop and Iterator are not mutually exclusive. One common way to iterate before the foreach loop was introduced in 1.5 (my preferred way) was like so:

for (Iterator iter = list.iterator(); iter.hasNext();) {
    Object obj = iter.next();
}

2. By "for loop" I assume you mean "using get()". NEVER iterate using get(). It will work fine on ArrayList, but will be painfully slow on LinkedList, and doesn't exist on Set or Collection. Using an Iterator (or foreach, which is syntactic sugar for an Iterator) means you'll get proper and consistent behavior on any collection.
3. The memory usage will not be any different, or will be insignificantly different. This kind of micro-optimization without hard profiling numbers is a good way to gain a tiny, meaningless bit in one area at a much larger cost in another.
Just use a foreach loop, or, in cases where you need to modify the collection under iteration, an explicit Iterator or ListIterator.
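To make that advice concrete, here is a minimal sketch (list contents are purely illustrative) showing both the foreach form and the explicit-Iterator form you need when removing elements during iteration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class IterateDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("alice", "bob", "carol"));

        // Read-only traversal: the foreach loop compiles down to an Iterator,
        // so this works identically on ArrayList, LinkedList, Set, etc.
        for (String name : names) {
            System.out.println(name);
        }

        // Removing while iterating needs the explicit Iterator; calling
        // names.remove(...) inside a foreach typically throws
        // ConcurrentModificationException.
        for (Iterator<String> iter = names.iterator(); iter.hasNext();) {
            if (iter.next().startsWith("b")) {
                iter.remove();
            }
        }
        System.out.println(names); // [alice, carol]
    }
}
```

Either form iterates in the same way under the hood; the explicit Iterator simply exposes remove().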
Similar Messages
-
Which is better storing string values in Map or String buffer
Hi,
I have to store 10 string values in a cookie. Do I use a StringBuffer and append all the values, or put them in a HashMap?
If I use a StringBuffer I have to use a StringTokenizer to loop through and extract the values, but if I use a Map then retrieval will be easier. In terms of memory management, which is better? I have to create this cookie for every unique IP hitting my site.
Thanks,
Viiveek

viiveek wrote:
I have to store 10 string values in a cookie. Do I use a StringBuffer and append all the values, or put them in a HashMap?
If I use a StringBuffer I have to use a StringTokenizer to loop through and extract the values, but if I use a Map then retrieval will be easier. In terms of memory management, which is better? I have to create this cookie for every unique IP hitting my site.

In terms of memory management, a StringBuffer could potentially be better, as there is no need to keep key objects in memory.
That doesn't make it a good idea. The bytes of memory you'd lose by using a Map are made up for by the fact that Map was expressly made for storing key/value pairs. Memory management should be about the 200th factor you consider. -
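Note that either way the cookie itself can only carry a single string, so you end up encoding and decoding regardless; the Map-vs-StringBuffer question is only about the in-memory representation. A minimal sketch of round-tripping a map through one delimited string (class name and the '&'/'=' delimiters are illustrative, and this assumes the values contain neither character):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CookieCodec {
    // Join the values into one cookie string: "k1=v1&k2=v2&...".
    static String encode(Map<String, String> values) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : values.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }

    // Parse the cookie string back into a map for easy keyed retrieval.
    static Map<String, String> decode(String cookie) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String pair : cookie.split("&")) {
            int eq = pair.indexOf('=');
            out.put(pair.substring(0, eq), pair.substring(eq + 1));
        }
        return out;
    }
}
```

In a real servlet you would also URL-encode the values; this sketch omits that to keep the shape visible.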
Cisco Router Memory Utilization
Hi,
We have a Cisco SA520 Router (Firmware 2.1.18)
We have only been using it for about a month now. The router seems OK; it's just that
I am worried about the memory utilization, which has reached 62% (144/234 MB).
Is this something to worry about?
How can I lower the usage?
Pardon me, I am just new to Cisco devices.
Many Thanks.
ACAC,
Please go ahead and upgrade to the latest firmware (2.1.51). Memory utilization shouldn't be a problem. After the upgrade, please keep an eye on the memory and report back.
Thanks,
Jasbryan
Cisco Support Engineer
.:|:.:|:. -
How to get the Memory Utilization Data for Cloud Service
Hi,
We are planning to monitor the performance of Cloud Services hosted on Azure through Visual Studio Online [TFS]. However, I couldn't find any performance metric for memory utilization on an individual Cloud Service.
So please help: how can we monitor memory utilization on an individual Cloud Service hosted through VSO?
Thanks.
Regards,
Subhash Konduru
Please remember to mark the replies as answers if they help and unmark them if they provide no help.

If you are using VSO then you can take a look at Azure Application Insights, a service hosted on Azure which will help you detect issues, solve problems and continuously improve your web applications.
Read more about Application insights here -
http://azure.microsoft.com/en-us/documentation/articles/app-insights-get-started/
https://msdn.microsoft.com/en-us/library/dn793604.aspx
Bhushan | Blog |
LinkedIn | Twitter -
Coding Preference ..Which is better for memory?
Hey all,
Java's garbage collection is sweet. However, I was reading somewhere that setting some objects to null after I'm done with them will actually help.
(Help what, I'm not sure; my guess is memory used by the JVM.)
Thus I have two ways to do the same thing and I'd like to hear people's comments on which is "better", or will yield faster performance.
Task: I have a Vector of Strings (called paths) that holds absolute file paths. (Don't ask why I didn't use a String[].) I'd like to check and see if they exist, and if not, create them; I'll use the createNewFile() method for that.
Method A -- Here I'll reuse that File object

public void myMethod() throws Exception {
    File file = null;
    for (int i = 0; i < paths.size(); i++) {
        file = new File(paths.get(i).toString());
        boolean made = file.createNewFile();
        if (made) { doSomething(); }
        file = null;
    }
}

Method B -- Here I'll use, um, "dynamically made" ones that won't eventually be set back to null
public void myMethod() throws Exception {
    for (int i = 0; i < paths.size(); i++) {
        boolean made = (new File(paths.get(i).toString())).createNewFile();
        if (made) { doSomething(); }
    }
}

So when the code eventually exits myMethod, the object "file" will be out of scope and trashed, correct? If that's the case, then would there be any other differences between the two implementations?
Thanks

There's no real difference between the two. Choose the style you prefer, although in the first one I'd lose the "file = null" statement, since that variable is about to disappear, and I'd move the definition into the loop -- always give variables as small a scope as possible, mainly to keep the logic simple:

public void myMethod() throws Exception {
    for (int i = 0; i < paths.size(); i++) {
        File file = new File(paths.get(i).toString());
        boolean made = file.createNewFile();
        if (made) { doSomething(); }
    }
} -
Need to know which is better and faster for laptop memory upgrade
What is the difference and which is better for a laptop memory upgrade: SDRAM, DDR SDRAM, SIMM? One site suggested DDR SDRAM and BB suggested SO DIMM
SODIMM is the physical form factor of the module, DDR SDRAM is the actual memory type.
SIMMs are an ANCIENT form of memory module. If you have a system that uses SIMMS, you will spend less money throwing your computer in the trash and buying a new one.
Non-DDR SDRAM is also ancient.
Each system can only use one type of memory; i.e., you can't use DDR memory in a system that only supports non-DDR memory, and you can't use DDR2 memory in a DDR system.
To find out if the configuration is 2x256 or 1x512, you'll need to look at what is installed.
*disclaimer* I am not now, nor have I ever been, an employee of Best Buy, Geek Squad, nor of any of their affiliate, parent, or subsidiary companies. -
Which is better in term of power consumption for L...
Hello!
I would like to know in terms of power consumption which connection type should I use on my Lumia 1020. 4G LTE or 3G. I'm not a person who watches movies on his smartphone, and I don't browse the Internet intensively using my smartphone. I use it to take pictures, check my e-mails, access Facebook, Twitter, Instagram and Flickr.
Thank you.
Cosmin Petrenciuc

3G is in general less power hungry.
-
Which is better in terms of performance
Dear All,
which is better..
to use FOR ALL ENTRIES or
to build a range and use WHERE IN range_table? Does this have a data limitation problem?
is there a better method?
Thanks,
Raghavendra
Moderator message - Please search before asking - post locked
Edited by: Rob Burbank on Jul 7, 2009 10:53 AM

I want to know which is better

There's not enough information for anyone here to be able to tell you.
Obviously the first one "looks" faster, but without knowing the tables, structure, data, indexes, platform etc. etc. etc. we won't have a clue. -
Which is better? ArrayList or LinkedList
Do you know which one is better between ArrayList and LinkedList in terms of performance, speed and capacity?
Which one do you suggest using?
Thanks

It depends upon how the list is going to be used. ArrayLists and LinkedLists work differently -- you need to think about how they each store their data.
ArrayLists store their list items in, well, arrays. This makes them very fast at addressing those items by index #. So any implementation that needs a lot of random access to the list, such as sorting, is going to be relatively fast.
The downside of storing the list in an array presents itself when it comes time to add more items to the list. When it runs out of space in the array, it must create a new larger array and copy the items over to it. Also, if you need to insert or remove an item anywhere other than the end of the list, ArrayList must shift the subsequent items in the list by doing an array copy.
This can be a real drag if you're implementing a queue. This is where LinkedList shines. Each item in the list points to the next and previous ones in the list. Inserting, appending or removing list items involves a couple simple assignment statements. No reallocations or large memory copies are involved. Access is easy as long as it is sequential.
Random access in a linked list is problematic however. In order to get to the Nth item in the list, LinkedList must start with the first item in the list and step through the list N-1 times. An order of magnitude slower than using an ArrayList. -
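The indexed-access difference described above is easy to see directly. A rough sketch (not a rigorous benchmark; absolute times are machine-dependent) comparing get(i) loops over the two list types:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListAccessDemo {
    // Sums the list via get(i): O(1) per call on ArrayList,
    // O(n) per call on LinkedList (it must walk node by node).
    static long sumByIndex(List<Integer> list) {
        long sum = 0;
        for (int i = 0; i < list.size(); i++) {
            sum += list.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 20_000;
        List<Integer> array = new ArrayList<>();
        List<Integer> linked = new LinkedList<>();
        for (int i = 0; i < n; i++) { array.add(i); linked.add(i); }

        long t0 = System.nanoTime();
        sumByIndex(array);
        long t1 = System.nanoTime();
        sumByIndex(linked);   // quadratic overall: each get() restarts the walk
        long t2 = System.nanoTime();

        System.out.printf("ArrayList:  %d ms%n", (t1 - t0) / 1_000_000);
        System.out.printf("LinkedList: %d ms%n", (t2 - t1) / 1_000_000);
    }
}
```

This is also why iterating any list with an Iterator (or foreach) is the safe default: it is sequential on both implementations.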
Queue or array which is better?
I need to store an array of clusters whose length is not defined; I will update it and add new elements to it.
Either an array or a queue can be used to store the clusters. Which one would be better to use in terms of memory usage, fast execution and other performance parameters?
I have some other doubts also.
Consider an array of 8 elements. When a new element is added to the array using 'Insert Into Array', will a new copy of 9 elements be created? Or will the 9th element be linked after the 8th element (like a linked list)? Or does something else happen?
If a new copy is created, what happens to the old 8 elements in memory -- will that memory be freed or kept as is?
I have the same doubt in the case of queues.
Thanks in advance..
Solved!
Go to Solution.

In your case, you want to use a queue.
An array is stored in RAM in consecutive memory locations. When increasing the size of the array, the data structure is increased in size and often entirely moved to a place where it can all fit. If you are resizing an array inside a fairly fast loop, the performance hit would be noticeable.
A Queue is able to place individual elements in their own address chunks in RAM and is much more performance-friendly.
- Cheers, Ed -
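The same trade-off exists outside LabVIEW. As a purely illustrative Java analogy of what growing a full array costs versus appending to a linked structure (and of what happens to the old block):

```java
import java.util.LinkedList;

public class GrowthDemo {
    public static void main(String[] args) {
        // Appending to a full array means allocating a new, larger block
        // and copying every existing element into it.
        int[] data = {1, 2, 3, 4, 5, 6, 7, 8};
        int[] bigger = new int[data.length + 1];
        System.arraycopy(data, 0, bigger, 0, data.length);
        bigger[data.length] = 9;
        // The old 8-element block is not freed at this instant; once nothing
        // references it, the garbage collector reclaims that memory.

        // A linked structure appends by wiring up one new node;
        // the existing elements are never copied.
        LinkedList<Integer> list = new LinkedList<>();
        for (int i = 1; i <= 9; i++) {
            list.addLast(i);
        }

        System.out.println(bigger.length + " " + list.size()); // 9 9
    }
}
```

The general lesson carries over: repeatedly resizing a contiguous array inside a tight loop pays an allocate-and-copy each time, which is why queue-style structures are friendlier for unbounded growth.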
Which is better to install Oracle 11g database based on ASM or Filesystem
We will install 2 sets of Oracle 11.2.0.3 on Red Hat Linux 5.6 and configure Data Guard for them -- one will be the primary DB server, the other a physical standby DB server. The Oracle DB storage is based on a SAN disk array of 6TB. Now there are two options to manage the DB datafiles:
1. Install Oracle ASM
2. Create a traditional OS filesystem
Which is better? In the past, our 10g Data Guard environment was not based on Oracle ASM.
Some think that if we adopt Oracle ASM, the shortcomings are:
1. As there is one more instance, it will consume more memory and resources.
2. As the ASM file system cannot be shown at the OS level directly with commands such as "df", the disk utilization monitoring job will be more difficult; at least it cannot be supervised at the OS level.
3. As the DB needs daily incremental backups (Mon-Sat) to a local backup drive, the backup job must be done by RMAN rather than a user-managed script.
Can anyone provide some advice? Thanks very much in advance.

user5969983 wrote:
We will install 2 sets of Oracle 11.2.0.3 on Red Hat Linux 5.6 and configure Data Guard for them -- one will be the primary DB server, the other a physical standby DB server. The Oracle DB storage is based on a SAN disk array of 6TB. Now there are two options to manage the DB datafiles:
1. Install Oracle ASM
2. Create a traditional OS filesystem
Which is better? In the past, our 10g Data Guard environment was not based on Oracle ASM.

ASM provides a host of new features in terms of data management and performance -- to the extent that you can rip out the entire existing storage system and replace it with a brand new storage system, without a single second of database downtime.
Some think that if we adopt Oracle ASM, the shortcomings are:
1. As there is one more instance, it will consume more memory and resources.

Not really relevant on 64-bit h/w architecture, which removes limitations such as 4GB of addressable memory. On the CPU side -- heck, my game PC at home has an 8-core 64-bit CPU. Single-die and dual-core CPUs belong to the distant past.
Arguing that an ASM instance has overheads would be silly, and totally ignores the wide range of real and tangible benefits that ASM provides.

2. As the ASM file system cannot be shown at the OS level directly with commands such as "df", the disk utilization monitoring job will be more difficult; at least it cannot be supervised at the OS level.

That is A Very Good Thing (tm). Managing database storage from the o/s level is flawed in many ways.

3. As the DB needs daily incremental backups (Mon-Sat) to a local backup drive, the backup job must be done by RMAN rather than a user-managed script.

RMAN supports ASM fully.
I have stopped using cooked file systems for Oracle - I prefer ASM first and foremost. The only exceptions are tiny servers with a single root disk that needs to be used for kernel, database s/w, and database datafiles. (currently these are mostly Oracle XE systems in my case, and configured that way as XE does not support ASM and is used as a pure cost decision). -
Droplet or Simple Nucleus component which is better ?
Hi,
1) Which is better in terms of memory utilization (performance-wise): a droplet or a simple Nucleus component?
2) Is extending one droplet from another recommended, or is injecting a droplet recommended?
Please clarify these issues ASAP if anybody can.
Thanks

Hi,
Droplets are intended to connect the front end (JSPs) with business functionality through Nucleus components. They are primarily used for presentation logic that involves business rules.
So, you need to decide whether to go for a mere Nucleus component or a droplet based on your requirement.
It is good to have any business logic / common code in a tools class and call that method from the droplet. In that case, you do not need to extend another droplet and can reuse the code from the tools class by injecting the tools component.
Please let me know if this helps. Or else, please specify the requirements more specifically.
Hope this helps.
Keep posting the updates.
Thanks,
Gopinath Ramasamy -
Want to create a model for effective memory utilization with faster access
Can someone help me? I am looking for a solution to a problem. The problem description is as follows:
We have a data model like:
name
City
Address
Zipcode
1. We have a huge number (millions) of such objects available in memory. How can I make a good design for better memory utilization?
That is, in which structures should the data be stored in memory to make memory utilization effective? We already have data structures like HashMap and Hashtable, but beyond that, can we use them or other data structures in such a way that the memory utilized by these objects is minimal?
2. The design should be created keeping in mind that we can apply filters on any of the model attributes (e.g., if we want to see data of only those objects where the city name is New York), so filtering the data should be fast.

Perhaps you're trying to solve the wrong problem? If the true objective is "to retrieve data as quickly as possible," perhaps you should investigate a database rather than trying to squeeze things into the smallest possible memory footprint. You'd have to have some pretty hefty hardware to keep "millions" of records in memory in addition to applications, server, OS, IP stack, etc.
But only people closest to the application can make that assessment. Just offering it as a possible alternative to consider. -
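If a database really is ruled out, one common in-memory pattern is a secondary index per filterable attribute, so that a filter becomes a map lookup instead of a scan over millions of objects. A minimal sketch (all class and field names here are illustrative, not from the original post):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PersonIndex {
    static final class Person {
        final String name, city, address, zipcode;
        Person(String name, String city, String address, String zipcode) {
            this.name = name; this.city = city;
            this.address = address; this.zipcode = zipcode;
        }
    }

    // Secondary index: city -> records in that city. Filtering by city
    // is then a single hash lookup.
    private final Map<String, List<Person>> byCity = new HashMap<>();

    void add(Person p) {
        // intern() shares one String instance per distinct city name, which
        // trims memory when millions of records repeat a few hundred cities.
        byCity.computeIfAbsent(p.city.intern(), k -> new ArrayList<>()).add(p);
    }

    List<Person> inCity(String city) {
        return byCity.getOrDefault(city, Collections.emptyList());
    }
}
```

The cost is one extra map per indexed attribute, so this only pays off for attributes you actually filter on; for arbitrary ad-hoc filters the database suggestion above remains the sounder answer.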
Memory Utilization during XML Parsing - Response time is high
Slow response time while xml parsing is done.
Description of the problem:
During XML parsing, memory is used and discarded so frequently that garbage collection
is occurring multiple times per minute, impacting performance. In order to better
understand the source of the memory usage issue, we used JProbe Memory Debugger.
JProbe Memory Debugger was run in Aggregate mode in order to determine which classes
were using the most total or aggregate memory (the sum of the memory required
to instantiate not just a given object, but all the objects it uses.) The result
was that weblogic.apache.xerces.impl.xs.dom.DocumentImpl and weblogic.apache.xerces.jaxp comprise 23.8% and 15.4%, respectively, of total memory on a heap of 121MB. In additional tests, the larger the heap, the greater these percentages were.
This results in slow response time.
The following are the details of software and Hardware configurations used:
Server: weblogic 8.1
OS: Solaris 8
System Configuration: Sun Microsystems sun4u Sun Fire 6800
System clock frequency: 150 MHz
Memory size: 8192 Megabytes

"Kris" <[email protected]> wrote in message news:40f2fcda$1@mktnews1...
Sorry, I overlooked it.
Yes, we do have 8 GB RAM. As far as XML usage goes, we are parsing the XML to DOM (including validation) and then applying a transformation. But it's the parsing stuff which is eating the memory.

1. Can you run JProbe to find out real CPU utilization/bottlenecks?
2. The Apache Xerces implementation that is used in WebLogic has a design flaw that results in serialization of memory allocation by the transformer, which makes it impossible to use for intense multithreaded transformations. Consider using other transformers.
Regards,
Slava Imeshev
"Slava Imeshev" <[email protected]> wrote:
Please answer my questions.
Regards,
Slava Imeshev
"Krisna" <[email protected]> wrote in message news:40f299ae$1@mktnews1...
Thanks Slava for your response. Coming back to response time, this process is part of a big task, so I can't really tell what response time I can allocate just for this piece alone. Roughly, it should be less than 0.4 seconds. The major concern is the memory utilization by these packages. So what makes it use this kind of memory, and is this a known issue?
"Slava Imeshev" <[email protected]> wrote:
"kris" <[email protected]> wrote in message news:40eaddce$1@mktnews1...
Slow response time while XML parsing is done.
Description of the problem:
During XML parsing, memory is used and discarded so frequently that garbage collection is occurring multiple times per minute, impacting performance. In order to better understand the source of the memory usage issue, we used JProbe Memory Debugger.
JProbe Memory Debugger was run in Aggregate mode in order to determine which classes were using the most total or aggregate memory (the sum of the memory required to instantiate not just a given object, but all the objects it uses). The result was that weblogic.apache.xerces.impl.xs.dom.DocumentImpl and weblogic.apache.xerces.jaxp comprise 23.8% and 15.4%, respectively, of total memory on a heap of 121MB. In additional tests, the larger the heap, the greater these percentages were.

A large heap means longer garbage collections. Anyway, DOM is very heavy on memory and you cannot escape it. What is your usage pattern for XML processing? Do you use XSL?

This results in slow response time.

What do you consider an acceptable/unacceptable response time?

The following are the details of the software and hardware configurations used:
Server: WebLogic 8.1
OS: Solaris 8
System Configuration: Sun Microsystems sun4u Sun Fire 6800
System clock frequency: 150 MHz
Memory size: 8192 Megabytes

Does this mean you have 8GB RAM on a 150MHz box?
Regards,
Slava Imeshev -
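On the "consider other approaches" point: when the application does not need the whole document tree in memory, streaming with SAX avoids DOM's per-document allocation entirely, since elements are handed to a callback one at a time and nothing is retained. A minimal sketch (the XML shape and element names are made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SaxCountDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<orders><order id=\"1\"/><order id=\"2\"/></orders>";

        final int[] count = {0};
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String local, String qName,
                                     Attributes attrs) {
                // Elements stream past one at a time; no document tree is
                // built, so memory stays flat regardless of input size.
                if ("order".equals(qName)) count[0]++;
            }
        };

        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
                     handler);
        System.out.println("orders: " + count[0]); // orders: 2
    }
}
```

This only helps where the processing can be expressed as a single pass; XSL transformations that need random access to the tree will still force a DOM (or similar) model into memory.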
Follow up on an old thread about memory utilization
This thread was active a few months ago; unfortunately it has taken until now for me to have enough spare time to craft a response.
From: SMTP%"[email protected]" 3-SEP-1996 16:52:00.72
To: [email protected]
CC:
Subj: Re: memory utilization
As a general rule, I would agree that memory utilization problems tend to be developer-induced. I believe that is generally true for most development environments. However, this developer was having a little trouble finding out how NOT to induce them. After scouring the documentation for any references to object destructors, or clearing memory, or garbage collection, or freeing objects, or anything else we could think of, all we found was how to clear the rows from an Array object. We did find some reference to setting the object to NIL, but no indication that this was necessary for the memory to be freed.
I believe the documentation, and probably some Tech-Notes, address the issue of freeing memory.
Automatic memory management frees a memory object when no references to the memory object exist. Since references are the reason that a memory object lives, removing the references is the only way that memory objects can be freed. This is why the manuals and Tech-Notes talk about setting references to NIL (i.e., freeing memory in an automatic system is done by NILing references, not by calling freeing routines). This is not an absolute requirement (as you have probably noticed, most things are freed even without setting references to NIL), but it accelerates the freeing of 'dead' objects and reduces memory utilization because the system tends to carry around fewer 'dead' objects.
It is my understanding that in this environment, the development tool (Forte) claims to handle memory utilization and garbage collection for you. If that is the case, then it is my opinion that it should be nearly impossible for the developer to create memory-leakage problems without going outside the tool and allocating the memory directly. If that is not the case, then we should have destructor methods available to us so that we can handle them correctly. I know when I am finished with an object, and I would have no problem calling a "destroy" or "cleanup" method. In fact, I would prefer that to just wondering whether Forte will take care of it for me.
It is actually quite easy to create memory leaks. Here are some examples:
Have a heap attribute in a service object. Keep inserting things into the heap and never take them out (i.e., forget to take them out). Since service objects are always live, everything in the heap is also live.
Have an exception handler that catches exceptions and doesn't do anything with the error manager stack (i.e., it doesn't call task.ErrMgr.Clear). If the handler is activated repeatedly in the same task, the stack of exceptions will grow until you run out of memory or the task terminates (task termination empties the error manager stack).
It seems to me that this is a weakness in the tool that should be addressed.
Does anyone else have any opinions on this subject?
Actually, the implementation of the advanced features supported by the Forte product results in some complications in areas that can be hard to explain. Memory management happens to be one of the areas most affected. A precise explanation of a non-deterministic process is not possible, but the following attempts to explain the source of the non-determinism.
o The ability to call from compiled C++ to interpreted TOOL and back to compiled C++.
This single ability causes most of the strange effects mentioned in this thread.
For C++ code, the location of all variables local to a method is not known (i.e., C++ compilers can't tell you at run-time what is a variable and what isn't). We use the pessimistic assumption that anything that looks like a reference to a memory object is a reference to a memory object. For interpreted TOOL code, the interpreter has exact knowledge of what is a reference and what isn't. But the TOOL interpreter is itself a C++ method. This means that any memory objects referenced by the interpreter during the execution of TOOL code could be stored in local variables in the interpreter. The TOOL interpreter runs until the TOOL code returns or the TOOL code calls into C++. This means that many levels of nested TOOL code can be the source of values assigned to local variables in the TOOL interpreter.
This is the complicated reason that answers the question: Why doesn't a variable that is created and only used in a TOOL method that has returned get freed? It is likely that the variable is referenced by local variables in the TOOL interpreter method. This is also why setting the variable to NIL before returning doesn't seem to help. If the variable in question is an Array, then invoking Clear() on the Array seems to help, because even though the Array is still live, the objects referenced by the Array have fewer references.
The other common occurrence of this effect is in a TextData that contains a large string. In this case, invoking SetAllocatedSize(0) can be used to NIL the reference to the memory object that actually holds the sequence of characters. Compositions of Arrays and TextDatas (i.e., an Array of TextDatas that all have large TextDatas) can lead to even more problems.
When the TOOL code is turned into a compiled partition, this effect is not noticed because the TOOL interpreter doesn't come into play and things execute the way most people expect. This is one area that we try to improve upon, but it is complicated by the 15 different platforms, and thus C++ compilers, that we support. Changes that work on some machines behave differently on other machines. At this point in time, it occasionally still requires that a TOOL programmer actively address problems. Obviously we try to reduce this need over time.
o Automatic memory management for C++ with support for multi-processor threads.
Supporting automatic memory management for C++ is not a very common feature. It requires a coding standard that defines what is acceptable and what isn't. Additionally, supporting multi-processor threads adds its own set of complications. Luckily, TOOL users are insulated from this because the TOOL-to-C++ code generator knows the coding standard. In the end you are impacted by the C++ compiler and possibly the differences that occur between different compilers and/or different processors (i.e., Intel x86 versus Alpha). We have seen applications that had memory utilization differences of up to 2:1.
There are two primary sources of differences.
The first source is how compilers deal with dead assignments. The typical TOOL fragment that is being memory-manager friendly might perform the following:

temp : SomeObject = new;
... // Use someObject
temp = NIL;
return;

When this is translated to C++ it looks very similar, in that temp will be assigned the value NULL. Most compilers are smart enough to notice that 'temp' is never used again because the method is going to return immediately, so they skip setting 'temp' to NULL. In this case it should be harmless that the statement was ignored (see the next example for a different variation). In more complicated examples that involve loops (especially long-lived event loops), a missed NIL assignment can lead to leaking the memory object whose reference didn't get set to NIL (incidentally, this is the type of problem that causes the TOOL interpreter to leak references).
The second source is a complicated interaction caused by the history of method invocations. Consider the following:
Method A() invokes method B(), which invokes method C().
Method C() allocates a temporary TextData, invokes SetAllocatedSize(1000000), does some more work and then returns.
Method B() returns.
Method A() now invokes method D().
Method D() allocates something that causes the memory manager to look for memory objects to free.
Now, even though we have returned out of method C(), we have started invoking methods again. This causes us to re-use portions of the C++ stack used to maintain the history of method invocation and space for local variables. There is some probability that the reference to the 'temporary' TextData will now be visible to the memory manager because it was not overwritten by the invocation of D() or anything invoked by method D().
This example answers questions of the form: Why does setting a local variable to NIL, returning, and then invoking task.Part.Os.RecoverMemory not cause the object referenced by the local variable to be freed?
In most cases these effects cause memory utilization to be slightly higher than expected (in well-behaved cases it's less than 5%). This is a small price to pay for the advantages of automatic memory management.
An object-oriented programming style supported by automatic memory management makes it easy to extend existing objects or sets of objects by composition. For example:
Method A() calls method B() to get the next record from the database. Method B() is used because we always get records (objects) of a certain type from method B(), so that we can reuse code.
Method A() enters each row into a hash table so that it can implement a cache of the last N records seen.
Method A() returns the record to its caller.
With manual memory management, there would have to be some interface that allows method A() and/or the caller of A() to free the record. This requires that the programmer have a lot more knowledge about the various projects and classes that make up the application. If freeing doesn't happen, you have a memory leak; if you free something while it's still being used, the results are unpredictable and most often fatal.
With automatic memory management, method A() can 'free' its reference by removing the reference from the hash table. The caller can 'free' its reference by either setting the reference to NIL or by getting another record and referring to the new record instead of the old record.
Unfortunately, this convenience and power doesn't come for free. Consider the following, which comes from the Forte run-time system:
A Window-class object is a very complex beast. It is composed of two primary parts: the UserWindow object, which contains the variables declared by the user, and the Window object, which contains the object representation of the window created in the window workshop. The UserWindow and the Window reference each other. The Window references the Menu and each Widget placed on the Window directly. A compound Window object, like a Panel, can also have objects placed in itself. These are typically called the children. Each of the children also has to know the identity of its Mom, so they refer to their parent object. It should be reasonably obvious that starting from any object that makes up the window, any other object can be found.
This means that if the memory manager finds a reference to any object in the Window, it can also find all other objects in the window. Now if a reference to any object in the Window can be found on the program stack, all objects in the window can also be found. Since there are so many objects, and the work involved in displaying a window can be very complicated (i.e., the automatic geometry management that lays out the window when it is first opened or resized), there are potentially many different references that could cause the same problem. This leads to a higher than normal probability that a reference exists that can cause the whole set of Window objects to not be freed.
We solved this problem in the following fashion:
Added a new method called RecycleMemory() on UserWindow.
Documented that when a window is not going to be used again, it is preferable that RecycleMemory() is invoked instead of Close().
The RecycleMemory() method basically sets all references from parent to child to NIL and sets all references from child to parent to NIL. Thus all objects are isolated from the other objects that make up the window.
Changed a few methods on UserWindow, like Open(), to check if the caller is trying to open a recycled window and throw an exception.
This was feasible because the code to traverse the parent/child relationship already existed and was being used at close time to perform other bookkeeping operations on each of the Widgets.
To summarize:
Automatic memory management is less error-prone and more productive, but doesn't come totally for free.
There are things that the programmer can do to assist the memory manager:
o Set object references to NIL when known to be correct (this is the way memory is deallocated in an automatic system).
o Use methods like Clear() on Array and SetAllocatedSize() on TextData that allow these objects to set their internal references to NIL when known to be correct.
o Use the RecycleMemory() method on windows, especially very complicated windows.
o Build similar types of methods into your own objects when needed.
o If you build highly connected structures that are very large in the number of objects involved, think about how they might be broken apart gracefully (it defeats some of the purpose of automatic management to go to great lengths to deal with the problem).
o Since program stacks are the source of the 'noise' references, try to do things with fewer tasks (this was one of the reasons that we implemented event handlers, so that a single task can control many different windows).
Even after doing all this, it's easy to still have a problem. Internally we have access to special tools that can help point at the problem so that it can be solved. We are attempting to give users UNSUPPORTED access to these tools for Release 3. This should allow users to more easily diagnose problems. It also tends to enlighten one about how things are structured and/or point out inconsistencies that are the source of known/unknown bugs.
Derek
Derek Frankforth [email protected]
Forte Software Inc. [email protected]
1800 Harrison St. +510.869.3407
Oakland CA, 94612

I believe he means to reformat it like a floppy disk.
Go into My Computer and locate the drive letter associated with your iPod (it normally says iPod in it, and shows under removable storage).
Right-click on it and choose Format -- make sure the "quick format" option is not checked. Then let it format.
If that doesn't work, there are steps somewhere in the 5th gen forum (I don't have the link on hand) to use usbstor.sys to update the USB drivers for the Nano/5th gen.