TCAM Utilization
Hello everyone!
I have noticed the following on my 3750 stack:
sh platform tcam utilization
CAM Utilization for ASIC# 0 Max Used
Masks/Values Masks/values
Unicast mac addresses: 528/4224 508/3998
IPv4 IGMP groups + multicast routes: 144/1152 11/47
IPv4 unicast directly-connected routes: 528/4224 508/3998
IPv4 unicast indirectly-connected routes: 272/2176 64/451
IPv4 policy based routing aces: 512/512 2/2
IPv4 qos aces: 528/528 82/82
IPv4 security aces: 1024/2048 27/27
# of HL3U fibs 2229
# of HL3U adjs 1831
# of HL3U mpaths 2
# of HL3U covering-fibs 1
# of HL3U fibs with adj failures 4
Fibs of Prefix length 0, with TCAM fails: 0
Fibs of Prefix length 1, with TCAM fails: 0
Fibs of Prefix length 2, with TCAM fails: 0
Fibs of Prefix length 3, with TCAM fails: 0
Fibs of Prefix length 4, with TCAM fails: 0
Fibs of Prefix length 5, with TCAM fails: 0
Fibs of Prefix length 6, with TCAM fails: 0
Fibs of Prefix length 7, with TCAM fails: 0
Fibs of Prefix length 8, with TCAM fails: 0
Fibs of Prefix length 9, with TCAM fails: 0
Fibs of Prefix length 10, with TCAM fails: 0
Fibs of Prefix length 11, with TCAM fails: 0
Fibs of Prefix length 12, with TCAM fails: 0
Fibs of Prefix length 13, with TCAM fails: 0
Fibs of Prefix length 14, with TCAM fails: 0
Fibs of Prefix length 15, with TCAM fails: 0
Fibs of Prefix length 16, with TCAM fails: 0
Fibs of Prefix length 17, with TCAM fails: 0
Fibs of Prefix length 18, with TCAM fails: 0
Fibs of Prefix length 19, with TCAM fails: 0
Fibs of Prefix length 20, with TCAM fails: 0
Fibs of Prefix length 21, with TCAM fails: 0
Fibs of Prefix length 22, with TCAM fails: 0
Fibs of Prefix length 23, with TCAM fails: 0
Fibs of Prefix length 24, with TCAM fails: 0
Fibs of Prefix length 25, with TCAM fails: 0
Fibs of Prefix length 26, with TCAM fails: 0
Fibs of Prefix length 27, with TCAM fails: 0
Fibs of Prefix length 28, with TCAM fails: 0
Fibs of Prefix length 29, with TCAM fails: 0
Fibs of Prefix length 30, with TCAM fails: 0
Fibs of Prefix length 31, with TCAM fails: 0
Fibs of Prefix length 32, with TCAM fails: 107973
Fibs of Prefix length 33, with TCAM fails: 0
As far as I understand, the MAC address table is full, but what does this mean?
IPv4 unicast directly-connected routes: 528/4224 508/3998
Do I have a lot of connected routes? I don't have more than 20 connected routes.
sh ip arp summary gives this:
1925 IP ARP entries, with 50 of them incomplete
What should I do to optimize it?
Thank you!
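Output like the above can also be checked programmatically. A minimal sketch in Python, assuming the column layout shown in the paste (region name, then Max masks/values, then Used masks/values); the 90% warning threshold is an arbitrary illustration, not a Cisco recommendation:

```python
import re

# Matches lines of the form:
#   "<name>: <max_masks>/<max_values>  <used_masks>/<used_values>"
ROW = re.compile(r'^\s*(.+?):\s+(\d+)/(\d+)\s+(\d+)/(\d+)\s*$')

def tcam_usage(show_output):
    """Return {region: (used_values, max_values, percent_used)}."""
    usage = {}
    for line in show_output.splitlines():
        m = ROW.match(line)
        if m:
            name = m.group(1).strip()
            max_values = int(m.group(3))
            used_values = int(m.group(5))
            usage[name] = (used_values, max_values,
                           round(100.0 * used_values / max_values, 1))
    return usage

sample = """\
Unicast mac addresses:                  528/4224        508/3998
IPv4 unicast directly-connected routes: 528/4224        508/3998
IPv4 security aces:                    1024/2048          27/27
"""

for region, (used, maximum, pct) in tcam_usage(sample).items():
    flag = "  <-- nearly full" if pct >= 90 else ""
    print(f"{region}: {used}/{maximum} ({pct}%){flag}")
```

Run periodically (e.g. from a poller that collects `show platform tcam utilization`), this makes the nearly full regions, such as the 3998/4224 directly-connected host entries above, stand out immediately.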
Hello Seb, thank you for your answer
SDM is
sh sdm prefer
The current template is "desktop access IPv4" template.
The selected template optimizes the resources in
the switch to support this level of features for
8 routed interfaces and 1024 VLANs.
number of unicast mac addresses: 4K
number of IPv4 IGMP groups + multicast routes: 1K
number of IPv4 unicast routes: 6K
number of directly-connected IPv4 hosts: 4K
number of indirect IPv4 routes: 2K
number of IPv4 policy based routing aces: 0.5K
number of IPv4/MAC qos aces: 0.5K
number of IPv4/MAC security aces: 2K
No IPv6 traffic, no VTP. And yes, there are VLANs trunked but not used on edge ports, since this is our bridge in MST.
The switch is edge/distribution.
Is it an SVI problem? Do I have to minimize them?
Similar Messages
-
Hi There,
I was wondering if you could help me avoid a situation where the limit of 100 IPs is reached on my client's new site using 3 x SG300 28P switches.
I have 1 x SG300 28P in Layer 3 mode, which is the default gateway for all the IP phones that will be installed. The PCs on the network will use the existing default gateway, which is another router. I will have another 2 x SG300 28P devices in Layer 2 mode which are connected to the Layer 3 SG300 28P.
My question: are the IPs registered against the TCAM limit only the devices which physically plug into the SG300 28P switches? I assume other computers on the network which are plugged into another switch and don't use the default gateway of the SG300 (it's only for voice) wouldn't be registered in the TCAM?
The site has around 65 computers currently, and obviously, plugging in 65 IP phones, we're going to hit a limit of over 100 IPs. My thought was to potentially keep the computers and phones separate on a couple of the switches to keep the IPs in the TCAM to a minimum. Is this possible?
Any advice would be welcomed!
Brett
Hi Thomas,
Thanks for the quick reply.
Just to confirm though: I want to be sure that the Layer 3 SG300 28P will have all the IP phones from the other Layer 2 switches using it as the default gateway for the voice VLAN. Obviously this will then register 60+ IP addresses. If I have the computers plugged into the back of the phones (which then connect to the SG300 switches), this will register another 60 IPs, correct? If I don't patch these computers into the phones and have them on a separate switch, then the TCAM address list doesn't care about these computer IPs? I do believe we'll have traffic routing from the computers to the phones even if they are on a different switch, so would that then add these addresses to the TCAM?
The reason I ask this, to be clear, is that I read about someone else going over the 100 limit and causing the network to slow down, which with voice traffic I want to avoid...
Brett -
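The arithmetic behind the concern is worth making explicit. The 100-host figure and the device counts come from the post above; whether PCs behind the phones actually count against the L3 switch's host table is exactly the open question being asked, so treat the second parameter as the variable:

```python
HOST_TABLE_LIMIT = 100  # SG300 Layer 3 mode host limit cited in the post

def hosts_registered(phones, pcs_via_this_gateway):
    """Hosts the L3 switch must track: every phone using it as the
    voice-VLAN gateway, plus any PCs whose traffic it also handles."""
    return phones + pcs_via_this_gateway

print(hosts_registered(65, 65))  # 130: over the limit if PCs count
print(hosts_registered(65, 0))   # 65: fits if PCs stay on the other gateway
```

So keeping the PCs off the SG300's gateway (the separation proposed in the post) is what keeps the total under the limit, assuming only gateway clients consume host entries.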
How to show hardware tcam status via CLI ?
Dear expert,
We have an MDS 9513 with FAB-3 and a 9248-256 L/C. More than 1000 zones were added in a fabric; we want to know the exact TCAM utilization and keep the zone capability safe. Any input would be highly appreciated.
Alex
Sent from Cisco Technical Support iPhone App
We found a workaround with a script which connects to the SC via telnet and checks the status of the power supply with the command "showboards".
Edited by: trige123 on Jun 15, 2008 2:07 AM -
Platform/versions:
# sh ver
kickstart: version 6.2(2)
system: version 6.2(2)
# sh mod
Mod Ports Module-Type Model Status
1 32 10 Gbps Ethernet Module N7K-M132XP-12 ok
2 48 1000 Mbps Optical Ethernet Module N7K-M148GS-11 ok
3 32 1/10 Gbps Ethernet Module N7K-F132XP-15 ok
4 32 1/10 Gbps Ethernet Module N7K-F132XP-15 ok
5 0 Supervisor Module-1X N7K-SUP1 ha-standby
6 0 Supervisor Module-1X N7K-SUP1 active *
I recently tried to add a couple of "ip dhcp relay address" statements to an SVI and received the following error message:
ERROR: Hardware programming failed. Reason: Tcam will be over used, please enable bank chaining and/or turn off atomic update
Studying this page, I was able to determine that I seemed to be hitting a 50% TCAM utilization limit on the F1 modules, which prevented atomic updates:
# show hardware access-list resource entries module 3
ACL Hardware Resource Utilization (Module 3)
Instance 1, Ingress
TCAM: 530 valid entries 494 free entries
IPv6 TCAM: 8 valid entries 248 free entries
Used Free Percent
Utilization
TCAM 530 494 51.75
I was able to work around it by disabling atomic updates:
hardware access-list update default-result permit
no hardware access-list update atomic
I understand that with this config I am theoretically allowing ACL traffic during updates which shouldn't be allowed (alternatively, I could drop it), but that's not really my primary concern here.
First of all, I need to understand why adding DHCP relay statements apparently affects my TCAM entry resources.
Second, I need to understand if there are other implications of disabling atomic updates, such as during ISSU.
Third, what are my options - if any - for planning the usage of the apparently relatively scarce resources on the F1 modules?
Could be CSCua13121:
Symptom:
Host of certain Vlan will not get IP address from DHCP.
Conditions:
M1/F1 chassis with a FabricPath VLAN and atomic update enabled.
On such a system, if an SVI configured with DHCP relay is bounced (shut down and brought back up), it may cause a DHCP relay issue.
Workaround:
Disable atomic update and bounce the VLAN. Disabling atomic update may cause packet drops.
Except that ought to be fixed in 6.2(2). -
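The 50% behaviour can be reasoned about with a quick calculation. This is a simplified model inferred from the thread, not NX-OS's exact algorithm: an atomic update keeps the old entries programmed while the new set is written, so the updated entry set must fit in the free half of the bank:

```python
def utilization_pct(valid, free):
    """Percent of TCAM entries in use, as reported by
    'show hardware access-list resource entries'."""
    return round(100.0 * valid / (valid + free), 2)

def atomic_update_fits(valid, free, added=0):
    """Simplified model: with atomic update, old and new entry sets
    coexist briefly, so usage (after adding entries) must stay <= 50%."""
    total = valid + free
    return (valid + added) <= total // 2

# Figures from the F1 module output above: 530 valid, 494 free.
print(utilization_pct(530, 494))     # 51.76 -> just over the 50% line
print(atomic_update_fits(530, 494))  # False -> atomic updates blocked
```

Under this model the switch was only a couple of dozen entries over the threshold, which is why a handful of DHCP relay ACEs was enough to trip the error.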
3560 CPU utilization constantly showing 87%
Hello, we are using a 3560 switch and the CPU utilization is constantly showing 87%.
This switch is doing routing and leased lines are connected to service providers.
Are there any troubleshooting steps that can be done?
Regards,
Shiva
Disclaimer
The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
Liability Disclaimer
In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
Posting
Inayath's document is nice, and in it he mentions show tcam utilization. If your TCAM overflows, some packet forwarding will be done in the CPU. So that's an important check; ensure you're using the most suitable SDM template. -
Monitor tcam usage , ASR's
Hi,
How do I get TCAM utilization using SNMP on an ASR100x?
Sent from Cisco Technical Support iPad App
I'm also looking for this as well; if I find anything I'll let you know.
You aren't experiencing the Deny-Jump TCAM exhaustion currently investigated under CSCtz33305, by any chance, are you? -
I don't suppose there is any chance that the SG300 TCAM utilization is contained in the MIB somewhere? I would like to find a way of monitoring it via SNMP.
Cheers,
Donald.
Hello Smunzani, there is no difference between the switches aside from the obvious: one has PoE, and one has 52 ports vs 28 ports.
To enable the L3, you may console the switch and use the command
"set system mode router"
You may also set the system mode through the menu if you're using the older 1.0.0.27 firmware (which does not support the textview CLI)
If you don't have access to the console, you may log in to the GUI and go to SECURITY -> TCP/UDP SERVICES and enable telnet then telnet the switch as well.
-Tom -
Extended acl - multiple ports on same acl line
Hello
I'm working on a (long) ACL and have started looking at putting multiple ports on the same line,
e.g.
instead of:
ip access-list extended test3
permit tcp any host 10.10.10.1 eq 80
permit tcp any host 10.10.10.1 eq 443
i'd use:
ip access-list extended test3
permit tcp any host 10.10.10.1 eq 80 443
It shortens the ACL considerably, but the questions are:
Does this method reduce the TCAM resources required (compared to writing the ACL longhand)?
What is the maximum number of ports that can be included on the same line - is it platform/IOS dependent?
thanks
Andy
Hello
No. I went ahead with the ACL with multiple ports in each ACE and it worked fine. It was deployed on an old WS-C3750G-24PS-E and worked pretty well. When I checked the TCAM on the switch I got the following output:
Cisco3750#show platform tcam utilization
CAM Utilization for ASIC# 0 Max Used
Masks/Values Masks/values
IPv4 security aces: 1024/1024 33/33
Note: Allocation of TCAM entries per feature uses
a complex algorithm. The above information is meant
to provide an abstract view of the current TCAM utilization
As there were other ACLs on the switch it was difficult to gauge if the multiple ports per ACE approach to ACLs actually saved any TCAM resources. If you find anything out post back - I'd be interested to hear.
thanks
Andy -
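Whether packing multiple `eq` ports into one ACE saves TCAM is platform- and release-dependent, as the thread suggests; some hardware handles port comparisons in separate L4 operation registers rather than extra entries. What does expand deterministically is a port *range*: TCAM matches (value, mask) pairs, so a `range lo hi` has to be split into mask-aligned power-of-two blocks. A Python sketch of that classic expansion (illustrative only, not the behaviour of any specific ASIC):

```python
def range_to_value_mask(lo, hi, width=16):
    """Expand a numeric [lo, hi] range into the minimal list of
    (value, mask) pairs a TCAM could match (mask bits = care bits)."""
    full = (1 << width) - 1
    pairs = []
    while lo <= hi:
        # Largest aligned power-of-two block starting at lo ...
        size = lo & -lo if lo else 1 << width
        # ... shrunk until it no longer overshoots hi.
        while lo + size - 1 > hi:
            size >>= 1
        pairs.append((lo, full ^ (size - 1)))
        lo += size
    return pairs

print(range_to_value_mask(80, 80))            # [(80, 65535)] - one exact entry
print(len(range_to_value_mask(1024, 65535)))  # 6 aligned blocks
```

This is why `range 1024 65535` is cheap (6 entries) while an arbitrary range like `range 1025 65534` expands into many more blocks; exact `eq` matches are always a single value/mask pair each.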
I have a 7600 running 12.2(33)SRE1. I was wondering why mls nde export statistics aren't incrementing.
I have the following configured:
ip flow-export source Loopback3
ip flow-export version 5
ip flow-export destination 192.168.2.200 9995
mls flow ip interface-full
no mls flow ipv6
mls nde sender
mls sampling time-based 512
I have Vlan interfaces with ip flow ingress configured.
interface Vlan804
ip address 192.168.4.1 255.255.255.252
no ip redirects
no ip unreachables
no ip proxy-arp
ip verify unicast source reachable-via any allow-default
ip flow ingress
load-interval 30
end
A show ip flow export shows me exported flows... the counters increment
RTR7600#show ip flow export
Flow export v5 is enabled for main cache
Export source and destination details :
VRF ID : Default
Source(1) 192.168.100.1 (Loopback3)
Destination(1) 192.168.2.200 (9995)
Version 5 flow records
315756904 flows exported in 10536943 udp datagrams
0 flows failed due to lack of export packet
0 export packets were sent up to process level
0 export packets were dropped due to no fib
0 export packets were dropped due to adjacency issues
0 export packets were dropped due to fragmentation failures
0 export packets were dropped due to encapsulation fixup failures
0 export packets were dropped enqueuing for the RP
0 export packets were dropped due to IPC rate limiting
0 export packets were dropped due to Card not being able to export
A show mls nde shows me nothing
RTR7600#show mls nde
Netflow Data Export enabled
Exporting flows to 192.168.2.200 (9995)
Exporting flows from 192.168.100.1 (62867)
Version: 7
Layer2 flow creation is disabled
Layer2 flow export is disabled
Include Filter not configured
Exclude Filter not configured
Total Netflow Data Export Packets are:
0 packets, 0 no packets, 0 records
Total Netflow Data Export Send Errors:
IPWRITE_NO_FIB = 0
IPWRITE_ADJ_FAILED = 0
IPWRITE_PROCESS = 0
IPWRITE_ENQUEUE_FAILED = 0
IPWRITE_IPC_FAILED = 0
IPWRITE_OUTPUT_FAILED = 0
IPWRITE_MTU_FAILED = 0
IPWRITE_ENCAPFIX_FAILED = 0
IPWRITE_CARD_FAILED = 0
Netflow Aggregation Disabled
Do the versions need to match? Is that what is preventing the mls nde export? Any suggestions or tips for troubleshooting this?
show mls netflow table-contention summary
Earl in Module 1
Summary of Netflow CAM Utilization (as a percentage)
====================================================
TCAM Utilization : 0%
ICAM Utilization : 0%
Netflow Creation Failures : 0
Netflow CAM aliases : 0
Earl in Module 2
Summary of Netflow CAM Utilization (as a percentage)
====================================================
TCAM Utilization : 19%
ICAM Utilization : 0%
Netflow Creation Failures : 0
Netflow CAM aliases : 0
Earl in Module 3
Summary of Netflow CAM Utilization (as a percentage)
====================================================
TCAM Utilization : 56%
ICAM Utilization : 0%
Netflow Creation Failures : 0
Netflow CAM aliases : 0
Earl in Module 5
Summary of Netflow CAM Utilization (as a percentage)
====================================================
TCAM Utilization : 16%
ICAM Utilization : 0%
Netflow Creation Failures : 0
Netflow CAM aliases : 0
Earl in Module 6
Summary of Netflow CAM Utilization (as a percentage)
====================================================
TCAM Utilization : 0%
ICAM Utilization : 0%
Netflow Creation Failures : 0
Netflow CAM aliases : 0
Thank you,
Danny
I did set the mls nde sender version to match up with NetFlow, but the real resolution to this problem was an extra command that was needed. I have time-based sampling turned on globally, and I'm running a version of 12.2(33)SR code above SRB. For newer versions of the code, you have to turn on "mls netflow sampling" under the Layer 3 interfaces that you want NDE export on. In the past, you could enable sampling globally and it would work. Now you have to enable sampling globally AND turn on sampling under each interface. The code I'm running is SRE1.
-
Hi
After upgrading to 15.3(3)S (and on one switch with 15.3(1)S1) I got this error:
(config)#interface GigabitEthernet0/11
(config-if)#service-policy output qos-test
Qos: Out of tcam resources to execute command
QoS: Policy attachment failed. Configuration exceeds hardware resources
for policy qos-test
#show logging
Nov 21 15:47:28: %QOSMGR-3-TCAM_EXHAUSTION: Internal Error in resource
allocation
bg-ba-m-1#show platform tcam utilization qos
Nile Tcam Utilization per Application & Region:
ES == Entry size == Number of 80 bit TCAM words
==================================================================
App/Region Start Num Avail ES Used Range Num Used
==================================================================
QOS 36864 4096 2
nile0 13
nile1 6
Hi startx001, hope you will be fine.
Did you find a solution for this issue?
I'm facing the same problem. -
Instant Access - static sharing of ACLs
Hello
I'm looking to deploy 802.1x/mab on an Instant Access 1000 interface deployment. PACLs/dACLs will be used for security. Many of these ACLs will be identical and I found the following document on static sharing of ACLs to keep tcam utilization down:
http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst6500/ios/12-2SX/configuration/guide/book/dot1x.html#wp1133455
Instant Access parent switch is a C6807-XL (151-2.SY4a) VSS pair using WS-X6904-40G for the IA FEX links
The 6807 doesn't have the commands below (referenced in above document)
mls acl tcam share-acl
platform hardware acl downloadable setup static
I have found the following command on the 6807:
platform feature-manager acl downloadable setup static
Is there an equivalent of "mls acl tcam share-acl" on the C6807-XL (151-2.SY4a) to enable static sharing of ACLs?
Thanks
Andy
Hello.
My understanding is that traffic from the inside to lower-security interfaces does not require the access-list and access-group commands.
That said, removing an entire ACL removes the access-group command.
apply
access-group acl_inside in interface inside.
I'm not sure if the same applies for other interfaces wishing to access lower security interfaces.
You can consider yourself lucky :)
Tim -
I deleted all files in c:\system\install on my Nokia 3650 and now my phone can only make calls! I can't open the menu or read SMS! I tried the codes *#7370* and *#7780*, and I tried to start up in safe mode (press pencil+green) and (green+3+*), but my problem isn't solved! Help me, please!
I think I must reinstall Symbian OS! But how? With a cable? And software? Which software? :-(
Hey,
Check the TCAM utilization and sdm template you are using on the box. Share the following outputs:
#show plat tcam utilization
#show sdm prefer
HTH.
Regards,
RS. -
Dear All,
We installed the cat3k_caa-universalk9.SPA.03.03.00.SE.150-1.EZ.bin IOS version on our 3650-24TDS.
We configured a MAC ACL on each port. We are trying to add 1000+ permit lines to an extended MAC ACL; when we add more than 600 or 650 permit lines, we get the error below.
"Aug 7 05:48:37.732: %ACL_ERRMSG-4-UNLOADED: 1 fed: Input MAC Port ACL on interface Gi1/0/1 for label 4 on asic0 could not be programmed in hardware and traffic will be dropped."
Kindly requesting help!
Hey,
Check the TCAM utilization and sdm template you are using on the box. Share the following outputs:
#show plat tcam utilization
#show sdm prefer
HTH.
Regards,
RS. -
Report for calculating capacity utilization and efficiency
Hi,
We are following REM in our company. The production line is defined in the production version. While backflushing the production line is called automatically and hence backflushing is done.
We calculate the capacity utilization by using the formulae.
Capacity Utilization = (Backflushed Qty/ Available capacity)*100.
My queries are:
1. Is there any standard report to determine the capacity utilization of a production line?
2. Is there any standard report to calculate the efficiency of a production line?
waiting for reply.
With regards,
Afzal
Hi Afzal,
1. You have mentioned: Available capacity = std. time per piece * no. of working hrs.
Let me explain with an example.
Suppose each piece takes 10 mins. According to your formula:
A.C = 10 * 24 * 60 = 14,400 per day, which is not correct.
Normally, 10 mins/piece means 6 pieces/hr, and for 24 hrs: 24 * 6 = 144.
So it must be: A.C = no. of working hrs / std. time per piece.
2. You have mentioned: capacity utilised = total backflushed qty per day, which means you are calculating capacity utilization based on input material.
3. Utilization = (Available capacity / Capacity utilised) * 100.
Suppose we consider available capacity per day = 100 and capacity utilized = 50:
Utilization = (100/50) * 100 = 200%, which is not correct; it should be only 50%. The ratio must be inverted: Utilization = (Capacity utilised / Available capacity) * 100.
My main doubt here is why you are calculating capacity based on input material.
Please explain your business process and what the exact requirement is so that I can help you out.
Please check the formulae. -
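The corrected formulas can be sanity-checked with a few lines of arithmetic. The 10 min/piece over 24 hours figures come from the example above; the backflushed quantity of 72 is a made-up illustration:

```python
def available_capacity(working_minutes, std_minutes_per_piece):
    """Available capacity = working time / standard time per piece."""
    return working_minutes / std_minutes_per_piece

def utilization_pct(backflushed_qty, available):
    """Capacity utilization = (backflushed qty / available capacity) * 100."""
    return 100.0 * backflushed_qty / available

# 10 min/piece over 24 hours -> 6 pieces/hr * 24 hrs = 144 pieces.
cap = available_capacity(24 * 60, 10)
print(cap)                       # 144.0
print(utilization_pct(72, cap))  # 50.0 -> half the line's capacity was used
```

Note the division order in both functions: inverting either ratio reproduces the 14,400 and 200% errors discussed above.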
Follow-up on an old thread about memory utilization
This thread was active a few months ago; unfortunately, it has taken me until now to have enough spare time to craft a response.
From: SMTP%"[email protected]" 3-SEP-1996 16:52:00.72
To: [email protected]
CC:
Subj: Re: memory utilization
As a general rule, I would agree that memory utilization problems tend to be developer-induced. I believe that is generally true for most development environments. However, this developer was having a little trouble finding out how NOT to induce them. After scouring the documentation for any references to object destructors, or clearing memory, or garbage collection, or freeing objects, or anything else we could think of, all we found was how to clear the rows from an Array object. We did find some reference to setting the object to NIL, but no indication that this was necessary for the memory to be freed.
I believe the documentation, and probably some Tech-Notes, address the issue of freeing memory.
Automatic memory management frees a memory object when no references to the memory object exist. Since references are the reason that a memory object lives, removing the references is the only way that memory objects can be freed. This is why the manuals and Tech-Notes talk about setting references to NIL (i.e. freeing memory in an automatic system is done by NILing references, not by calling freeing routines). This is not an absolute requirement (as you have probably noticed, most things are freed even without setting references to NIL), but it accelerates the freeing of 'dead' objects and reduces memory utilization because the system tends to carry around fewer 'dead' objects.
It is my understanding that in this environment, the development tool (Forte') claims to handle memory utilization and garbage collection for you. If that is the case, then it is my opinion that it should be nearly impossible for the developer to create memory-leakage problems without going outside the tool and allocating the memory directly. If that is not the case, then we should have destructor methods available to us so that we can handle them correctly. I know when I am finished with an object, and I would have no problem calling a "destroy" or "cleanup" method. In fact, I would prefer that to just wondering if Forte' will take care of it for me.
It is actually quite easy to create memory leaks. Here are some examples:
Have a heap attribute in a service object. Keep inserting things into the heap and never take them out (i.e. you forgot to take them out). Since service objects are always live, everything in the heap is also live.
Have an exception handler that catches exceptions and doesn't do anything with the error manager stack (i.e. it doesn't call task.ErrMgr.Clear). If the handler is activated repeatedly in the same task, the stack of exceptions will grow until you run out of memory or the task terminates (task termination empties the error manager stack).
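Both leak patterns translate directly to any garbage-collected language. A Python analogy (Python stands in for TOOL here, and the class and method names are invented for illustration):

```python
class ServiceObject:
    """Stands in for a long-lived service object: it is never
    collected, so anything it references stays live too."""
    def __init__(self):
        self.heap = []          # grows forever if entries are never removed
        self.error_stack = []   # analogue of the error-manager stack

    def handle_request(self, payload):
        self.heap.append(payload)     # leak: inserted, never taken out

    def handle_error(self, exc):
        self.error_stack.append(exc)  # leak: handler never clears the stack

    def clear_errors(self):
        self.error_stack.clear()      # analogue of task.ErrMgr.Clear

svc = ServiceObject()
for i in range(3):
    svc.handle_request({"row": i})
print(len(svc.heap))  # 3 -- all still live, because svc is still live
svc.heap.clear()      # 'freeing' = removing the references
print(len(svc.heap))  # 0 -- now collectible
```

The fix in both cases is the same as in the post: the collector can only free what nothing references, so the program has to drop the references itself.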
It seems to me that this is a weakness in the tool that should be addressed.
Does anyone else have any opinions on this subject?
Actually, the implementation of the advanced features supported by the Forte product results in some complications in areas that can be hard to explain. Memory management happens to be one of the areas most affected. A precise explanation of a non-deterministic process is not possible, but the following attempts to explain the source of the non-determinism.
o The ability to call from compiled C++ to interpreted TOOL and back to compiled C++.
This single ability causes most of the strange effects mentioned in this thread. For C++ code, the location of all variables local to a method is not known (i.e. C++ compilers can't tell you at run-time what is a variable and what isn't). We use the pessimistic assumption that anything that looks like a reference to a memory object is a reference to a memory object. For interpreted TOOL code, the interpreter has exact knowledge of what is a reference and what isn't. But the TOOL interpreter is itself a C++ method. This means that any memory objects referenced by the interpreter during the execution of TOOL code could be stored in local variables in the interpreter. The TOOL interpreter runs until the TOOL code returns or the TOOL code calls into C++. This means that many levels of nested TOOL code can be the source of values assigned to local variables in the TOOL interpreter.
This is the complicated reason that answers the question: why doesn't a variable that is created and only used in a TOOL method that has returned get freed? It is likely that the variable is referenced by local variables in the TOOL interpreter method. This is also why setting the variable to NIL before returning doesn't seem to help. If the variable in question is an Array, then invoking Clear() on the Array seems to help, because even though the Array is still live, the objects referenced by the Array have fewer references. The other common occurrence of this effect is in a TextData that contains a large string. In this case, invoking SetAllocatedSize(0) can be used to NIL the reference to the memory object that actually holds the sequence of characters. Compositions of Arrays and TextData's (i.e. an Array of TextData's that all have large TextData's) can lead to even more problems.
When the TOOL code is turned into a compiled partition, this effect is not noticed, because the TOOL interpreter doesn't come into play and things execute the way most people expect. This is one area that we try to improve upon, but it is complicated by the 15 different platforms, and thus C++ compilers, that we support. Changes that work on some machines behave differently on other machines. At this point in time, it occasionally still requires that a TOOL programmer actively address problems. Obviously, we try to reduce this need over time.
o Automatic memory management for C++ with support for multi-processor threads.
Supporting automatic memory management for C++ is not a very common feature. It requires a coding standard that defines what is acceptable and what isn't. Additionally, supporting multi-processor threads adds its own set of complications. Luckily, TOOL users are insulated from this because the TOOL-to-C++ code generator knows the coding standard. In the end, you are impacted by the C++ compiler and possibly the differences that occur between different compilers and/or different processors (i.e. Intel x86 versus Alpha). We have seen applications that had memory utilization differences of up to 2:1.
There are two primary sources of differences.
The first source is how compilers deal with dead assignments. The typical TOOL fragment that is being memory-manager-friendly might perform the following:
    temp : SomeObject = new;
    ... // Use someObject
    temp = NIL;
    return;
When this is translated to C++, it looks very similar, in that temp will be assigned the value NULL. Most compilers are smart enough to notice that 'temp' is never used again because the method is going to return immediately, so they skip setting 'temp' to NULL. In this case it should be harmless that the statement was ignored (see the next example for a different variation). In more complicated examples that involve loops (especially long-lived event loops), a missed NIL assignment can lead to leaking the memory object whose reference didn't get set to NIL (incidentally, this is the type of problem that causes the TOOL interpreter to leak references).
The second source is a complicated interaction caused by the history of method invocations. Consider the following:
    Method A() invokes method B(), which invokes method C().
    Method C() allocates a temporary TextData, invokes SetAllocatedSize(1000000), does some more work, and then returns.
    Method B() returns.
    Method A() now invokes method D().
    Method D() allocates something that causes the memory manager to look for memory objects to free.
Now, even though we have returned out of method C(), we have started invoking methods again. This causes us to re-use portions of the C++ stack used to maintain the history of method invocation and space for local variables. There is some probability that the reference to the 'temporary' TextData will now be visible to the memory manager, because it was not overwritten by the invocation of D() or anything invoked by method D().
This example answers questions of the form: why does setting a local variable to NIL, returning, and then invoking task.Part.Os.RecoverMemory not cause the object referenced by the local variable to be freed?
In most cases these effects cause memory utilization to be slightly higher than expected (in well-behaved cases it's less than 5%). This is a small price to pay for the advantages of automatic memory management.
An object-oriented programming style supported by automatic memory management makes it easy to extend existing objects or sets of objects by composition. For example:
    Method A() calls method B() to get the next record from the database. Method B() is used because we always get records (objects) of a certain type from method B(), so that we can reuse code.
    Method A() enters each row into a hash table so that it can implement a cache of the last N records seen.
    Method A() returns the record to its caller.
With manual memory management, there would have to be some interface that allows method A() and/or the caller of A() to free the record. This requires that the programmer have a lot more knowledge about the various projects and classes that make up the application. If freeing doesn't happen, you have a memory leak; if you free something while it's still being used, the results are unpredictable and most often fatal.
With automatic memory management, method A() can 'free' its reference by removing the reference from the hash table. The caller can 'free' its reference either by setting the reference to NIL or by getting another record and referring to the new record instead of the old record.
Unfortunately, this convenience and power doesn't come for free. Consider
the following,
which comes from the Forte' run-time system:
A Window-class object is a very complex beast. It is composed of two
primary parts:
the UserWindow object which contains the variables declared by the
user, and the
Window object which contains the object representation of the window
created in
the window workshop. The UserWindow and the Window reference each
other. The Window
references the Menu and each Widget placed on the Window directly. A
compound Window
object, like a Panel, can also have objects place in itself. These
are typically
called the children. Each of the children also has to know the
identity of it's
Mom so they refer to there parent object. It should be reasonably
obvious that
starting from any object that make up the window any other object
can be found.
This means that if the memory manager finds a reference to any
object in the Window, it can also find all the other objects in
the window. So if a reference to any object in the Window can
be found on the program stack, all objects in the window can
also be found. Since there are so many objects, and the work
involved in displaying a window can be very complicated (e.g.
the automatic geometry management that lays out the window when
it is first opened or resized), there are potentially many
different references that would cause the same problem. This
leads to a higher than normal probability that a reference
exists which keeps the whole set of Window objects from being
freed.
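The reachability argument can be demonstrated in miniature (a
Python sketch with invented names; the `stray` variable plays the
role of the leftover stack reference):

```python
import gc
import weakref

class Node:
    """A toy stand-in for a window part (Window, Panel, Widget)."""
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.children = []

def build_window():
    window = Node("Window")
    for i in range(3):
        child = Node(f"Widget{i}")
        child.parent = window           # child -> parent
        window.children.append(child)   # parent -> child
    return window

win = build_window()
probe = weakref.ref(win)   # watch the window without keeping it alive
stray = win.children[0]    # a single 'noise' reference to one widget

del win
gc.collect()
# The one widget reference reaches its parent, which reaches every
# sibling: the entire structure stays alive.
assert probe() is not None

del stray
gc.collect()
# Once the last outside reference is gone, the whole set is reclaimed.
assert probe() is None
```

One surviving reference to any part of the structure is enough to
pin every part of it in memory, which is exactly the hazard
described above.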
We solved this problem in the following fashion:

    Added a new method called RecycleMemory() on UserWindow.
    Documented that when a window is not going to be used
    again, it is preferable to invoke RecycleMemory() instead
    of Close(). The RecycleMemory() method basically sets all
    references from parent to child to NIL and all references
    from child to parent to NIL. Thus all objects are isolated
    from the other objects that make up the window.

    Changed a few methods on UserWindow, like Open(), to check
    whether the caller is trying to open a recycled window and
    throw an exception if so.
This was feasible because the code to traverse the parent/child
relationship already existed: it was being used at close time
to perform other bookkeeping operations on each of the Widgets.
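A minimal Python sketch of the shape of that fix (the names echo
the ones in the text, but this is not Forte's actual code):

```python
class Widget:
    def __init__(self, name):
        self.name = name
        self.parent = None   # set when placed on a window

class UserWindow:
    def __init__(self, widgets):
        self.widgets = list(widgets)
        for w in self.widgets:
            w.parent = self          # child -> parent reference
        self.recycled = False

    def recycle_memory(self):
        """Sever every parent<->child link so each object is
        isolated and individually collectible."""
        for w in self.widgets:
            w.parent = None          # drop child -> parent
        self.widgets = []            # drop parent -> child
        self.recycled = True

    def open(self):
        """Refuse to operate on a recycled window."""
        if self.recycled:
            raise RuntimeError("cannot open a recycled window")
        # ... normal open logic would go here ...
```

After recycle_memory(), no object in the former window can reach
any other, so one stray reference no longer pins the whole set.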
To summarize:

Automatic memory management is less error prone and more
productive, but it doesn't come totally for free. There are
things the programmer can do to assist the memory manager:

    o Set object references to NIL when you know it is correct
      to do so (this is how memory is deallocated in an
      automatic system).
    o Use methods like Clear() on Array and SetAllocatedSize()
      on TextData, which allow these objects to set their
      internal references to NIL when known to be correct.
    o Use the RecycleMemory() method on windows, especially
      very complicated ones.
    o Build similar methods into your own objects when needed.
    o If you build highly connected structures involving a very
      large number of objects, think about how the structure
      might be broken apart gracefully (though it defeats some
      of the purpose of automatic management to go to great
      lengths to deal with the problem).
    o Since program stacks are the source of the 'noise'
      references, try to do things with fewer tasks (this was
      one of the reasons we implemented event handlers, so that
      a single task can control many different windows).
Even after doing all this it's easy to still have a problem.
Internally we have access to special tools that can help point
at the problem so that it can be solved. We are attempting to
give users UNSUPPORTED access to these tools for Release 3.
This should allow users to diagnose problems more easily. It
also tends to enlighten one about how things are structured
and/or point out inconsistencies that are the source of
known/unknown bugs.
Derek
Derek Frankforth [email protected]
Forte Software Inc. [email protected]
1800 Harrison St. +510.869.3407
Oakland CA, 94612