Parameter configuration of memory management
Hi all,
I received an analysis check from SAP that suggests modifying some memory management parameter values.
SAP recommends:
ztta/roll_area: 3000329 (current value) -> 6500000 (recommended value)
rdisp/PG_SHM: 8192 (current value) -> 16384 (recommended value)
ztta/short_area: 3200000 (current value) -> 4000000 (recommended value)
My question is: can I safely change these parameters to the values suggested by SAP in my environment?
Is there any documentation that explains these parameters?
This is the information of the productive system:
machinfo
CPU info:
12 Intel(R) Itanium 2 9000 series processors (1.6 GHz, 24 MB)
533 MT/s bus, CPU version C2
18 logical processors
Memory: 65339 MB (63.81 GB)
Firmware info:
Firmware revision: 7.44
FP SWA driver revision: 1.18
IPMI is supported on this system.
Invalid combination of manageability firmware has been installed on this system.
Unable to provide accurate version information about manageability firmware
Platform info:
Model: "ia64 hp superdome server SD64B"
Machine ID number: xxxxxx
Machine serial number: xxxxxx
OS info:
Nodename: hostaddress
Release: HP-UX B.11.31
Version: U (unlimited-user license)
Machine: ia64
ID Number: xxxx
vmunix releaseversion:
@(#) $Revision: vmunix: B.11.31_LR FLAVOR=perf
Hi,
I am sure that SAP must have recommended these values after analysing your system, so you can safely go with them. What is also good is that if you face any problem in the future with this set of values suggested by SAP, you can always escalate back to SAP.
Please do not forget to take a backup of your instance profile before making any change. Also adjust the values for pool 10 and pool 40, otherwise your instance may not come up.
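Concretely, after the change the instance profile would contain lines like the following (a sketch only; profiles are normally maintained via transaction RZ10, and the pool 10/40 values depend on your system):

```text
ztta/roll_area   = 6500000
rdisp/PG_SHM     = 16384
ztta/short_area  = 4000000
```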
For more details about these parameters, please refer to the link below.
http://help.sap.com/saphelp_nw04/helpdata/en/02/962acd538111d1891b0000e8322f96/frameset.htm
With Regards,
Saurabh
Similar Messages
-
Recommended parameter for memory management
Hi, I have SAP ECC 5 installed on an iSeries server with DB2 as the DBMS. In ST02 there are some swaps, so I would like to optimize the memory management parameters. Can anyone give me the right suggestions?
Bye
Luca

Hi Luca,
if you do see some swaps, please increase these values - this is application dependent (and has nothing to do with DB2).
As you are on 64 bit, you can increase without any problem ...
Regards
Volker Gueldenpfennig, consolut.gmbh
http://www.consolut.de - http://www.4soi.de - http://www.easymarketplace.de -
Anyone use nio-memory-manager? What's it good for?
Can someone give me an example of when the nio-memory-manager should be used?
Thanks,
Andrew

If I remember the outcome of my experiments with NIO correctly, the situation is as follows:
1. Allocating/releasing huge shared memory blocks over and over can lead to OS/JVM issues. To avoid this I allocated the max size I wanted from the start (this is an option when configuring "off-heap" storage I believe). When doing it this way I had no reliability issues with the NIO memory manager in my tests.
2. Tangosol/Oracle used to claim that off-heap (NIO memory manager) storage results in worse performance than on-heap. I could not see any clear indication of this, but it may be application dependent. For our app, the reduced number of JVMs per server (reducing network communication, number of threads, the risk of any JVM performing GC at a given time, etc.) seemed to more than offset the allegedly slower memory manager, resulting in MUCH BETTER performance! A lot of queries anyhow (at least for us) mainly work against indexes that are always stored "on-heap"...
3. There is a limitation of 2 GB per NIO block (at least in 32-bit JVMs; not sure about 64-bit ones, but I have never seen any point in using them, since heaps larger than 2 GB seldom work well anyhow and each pointer consumes double the space in heap and CPU caches), but this is per CACHE and separate for PRIMARY and BACKUP, I believe! So my understanding is that if you (using a 64-bit OS) for instance have two (equally big) caches, you could allocate at most 2 * 2 * 2 = 8 GB of off-heap memory for holding data per JVM (without ANY impact on GC pauses!), and in addition use as much heap as you can get away with (given GC pause times) for holding the indexes to that data. This would make a huge difference in JVM count! For example, today we have to run 10+ JVMs per server using "on-heap" storage, while with "off-heap" storage we could probably get that down to one or two JVMs per server!
4. There may be both OS and JVM parameter that you need to set (depending on OS and JVM used!) in order to allocate large amounts of shared memory using NIO (the default is rather small).
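Point 1 above (allocating the maximum block up-front) can be illustrated with a plain NIO direct buffer; this is a simplified sketch only, where 16 MB stands in for the real maximum and Coherence itself manages such buffers internally:

```java
import java.nio.ByteBuffer;

public class OffHeapBlock {
    // Claim the maximum off-heap block once, up-front (point 1). Direct
    // buffers live outside the Java heap, so they add nothing to GC pauses.
    static final ByteBuffer BLOCK = ByteBuffer.allocateDirect(16 * 1024 * 1024);

    // Write a value at a fixed offset and read it back from off-heap memory;
    // the same region is reused rather than re-allocated.
    static long roundTrip(long value) {
        BLOCK.putLong(0, value);
        return BLOCK.getLong(0);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(42L)); // prints 42
    }
}
```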
As for the question about de-allocation: I never saw any sign of memory leaks with the NIO memory manager (i.e. space previously occupied by deleted objects was reused for new objects). But, as I mentioned above, you are better off allocating the maximum-size NIO memory block you intend to use up-front, and that memory will then remain allocated for this use. So if your amount of cache data varies and you would like to use the memory for other purposes (like heap!) at some point, you may be better off sticking with "on-heap", which is more flexible in that respect.
As I previously mentioned, off-heap is today (until Oracle implements the improvement request!) really only an option if you do not plan to use "overflow protection" or if your objects are of fixed size :-(
And if you are interested in using servers with a lot of memory and would like to use "off-heap" please talk to your Oracle sales rep about it! If enough people do that it may allow the Coherence developers to assign more time for making "off-heap" storage better! With this feature in place Coherence will be even more of a "killer application" than it already is!
Best Regards
Magnus -
In 11g, How to Enable Automatic Shared Memory Management (ASMM)
hi experts,
I have a new 11.2g database and I want to configure it to use ASMM.
To enable ASMM, should I assign a non-zero size to the Memory_Target parameter or the SGA_Target ? I have read conflicting statements.
Thanks, John

If you mean Automatic Memory Management, please see:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10595/memory003.htm#ADMIN11011
http://download.oracle.com/docs/cd/E11882_01/server.112/e10897/instance.htm#ADMQS12039
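For what it's worth, the distinction can be sketched as follows (illustrative sizes only; see the links above for the authoritative steps): a nonzero MEMORY_TARGET enables Automatic Memory Management for SGA and PGA together, while MEMORY_TARGET = 0 with a nonzero SGA_TARGET enables ASMM for the SGA only.

```sql
-- Automatic Memory Management (AMM, 11g): Oracle tunes SGA and PGA together
ALTER SYSTEM SET memory_target = 2G SCOPE = SPFILE;
ALTER SYSTEM SET sga_target = 0 SCOPE = SPFILE;

-- Automatic Shared Memory Management (ASMM): Oracle tunes the SGA pools only
ALTER SYSTEM SET memory_target = 0 SCOPE = SPFILE;
ALTER SYSTEM SET sga_target = 1G SCOPE = SPFILE;
```

Restart the instance afterwards, since SCOPE=SPFILE changes take effect at the next startup.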
HTH
Srini -
Confusion about Automatic Shared Memory Management
Hi,
Oracle Database 10g includes the Automatic Shared Memory Management feature which simplifies the SGA memory management significantly. To use Automatic Shared Memory Management, we have to set the SGA_TARGET initialization parameter to a nonzero value and the STATISTICS_LEVEL initialization parameter to TYPICAL or ALL.
Oracle Database 10g Rel. 2 documentation, in some places, says that:
If SGA_TARGET is specified, then the following FIVE memory pools are automatically sized:
* Buffer cache (DB_CACHE_SIZE)
* Shared pool (SHARED_POOL_SIZE)
* Large pool (LARGE_POOL_SIZE)
* Java pool (JAVA_POOL_SIZE)
* Streams pool (STREAMS_POOL_SIZE)
Ref.:
1. http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams192.htm
2. http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm
3. Oracle Database 10g: New Features for Administrators - Student Guide
But in some places I found the following:
If SGA_TARGET is specified, then the buffer cache (DB_CACHE_SIZE), Java pool (JAVA_POOL_SIZE), large pool (LARGE_POOL_SIZE), and shared pool (SHARED_POOL_SIZE) memory pools are automatically sized.
Here you can see that Streams Pool is not included in the automatically sized pools.
Ref.:
1. http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14211/build_db.htm#sthref252
Also, according to Oracle Press' Book "OCP Oracle Database 10g: New Features for Administrators Exam Guide:
Under Automatic Shared Memory Management, the database manages the
following FOUR major components of the SGA, also known as the auto-tuned SGA
parameters:
■ Buffer cache (DB_CACHE_SIZE)
■ Shared pool (SHARED_POOL_SIZE)
■ Large pool (LARGE_POOL_SIZE)
■ Java pool (JAVA_POOL_SIZE)
It is important to understand that even under Automatic Shared Memory
Management, you still need to configure any SGA component other than the four
auto-tuned components. Following are the manually sized components of the SGA:
■ Redo Log Buffer
■ The KEEP and RECYCLE buffer caches (if specified)
■ The nonstandard block size buffer caches (if specified)
■ The new Streams pool SGA component
■ The new Oracle Storage Management (OSM) buffer cache, which is meant
for the optional ASM instance
Now my question is "IS Streams Pool an auto-tuned SGA parameter?"
Thanks in advance.
--Khan

Hi,
I would advise you to read Document I.D. Note:295626.1 on Oracle Metalink.
It states that
When enabled, it lets Oracle decide the right size for some components of the SGA:
SHARED POOL
LARGE POOL
JAVA POOL
DB CACHE (using the DB_BLOCK_SIZE value)
The SGA_TARGET value will therefore define the memory size sharable between auto-tuned and manual parameters.
The manual parameters are:
DB_KEEP_CACHE_SIZE / DB_RECYCLE_CACHE_SIZE
DB_nK_CACHE_SIZE (non default block size)
LOG_BUFFER
FIXED SGA
STREAMS_POOL_SIZE
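One way to check this on a given system (a sketch): v$sga_dynamic_components lists exactly the components ASMM resizes, so whether the streams pool appears there settles the question for your release.

```sql
SELECT component, current_size, min_size, max_size
  FROM v$sga_dynamic_components;
```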
Adith -
Help me configure Change request management !!!
Dear friends,
I am going to configure Change Request Management, so to ensure that the configuration is not erroneous I would need some expert advice.
I just want to clarify a few things before I proceed.
I am also referring to SPRO and the related notes.
Scenario :
I have two systems: SAP ECC 6.0 with system ID R03, and Solution Manager with system ID SOL.
R03 has three clients: 300, 600, 700.
In R03, 300 is the development client, 600 is the quality client, and 700 is the production client.
SOL has two clients, 100 and 200,
with 200 as the production client.
Q.1) <b>Do I have to configure ChaRM in both clients (100 and 200) of SOLMAN?</b>
Q.2) Initially I tried to set up ChaRM in client 100 of SOLMAN, but later realized that it has to be set up in client 200.
When I log on to client 200 and execute the IMG activity SPRO -> SAP Solution Manager -> Basic Settings -> SAP Solution Manager System -> Activate Integration with Change Request Management,
it takes the previous client (client 100) as the Change Request Management client by default.
(As we know, there are three steps in the above activity.) The other activities are executed properly; the only problem is that the default client is always set to 100, which should not be the case.
I do get the prompt saying "The change request client is set to client 100, do you want to change to client 200?", but on clicking Yes it still sets the same client 100 as the ChaRM client.
<b>Please let me know what I should do to set the change request client to 200.</b>
Q.3) Regarding TMS, we have a local domain controller in SOLMAN and a local domain in R3.
We are planning to establish domain links between the two systems (i.e. both domain controllers).
Is this the right strategy?
<b>Any other method that you can recommend?</b>
Q.4) One of the IMG activities says: generate destinations to client 000 of all the domain controllers.
Whenever I do this, the destinations are created with errors; I am not able to create trusted RFC destinations without errors.
When I log on to the satellite domain controller and execute SM59, there are two destinations created, TRUSTED and BACK.
These destinations work well,
but when I log on to SOLMAN, go to SM59, and test the TMW and TRUSTED RFC destinations using Remote Logon, I get the error
"no authorization to logon as trusted system".
I went through one note which recommended a kernel upgrade to solve the problem.
In R3 my kernel release is 700 with patch level 56; the note recommends applying patch 80. Did you have these problems?
<b>What are your kernel patch levels in the satellite and SOLMAN systems?</b>
Q.5) To be able to raise tickets from R3 to SOLMAN we create RFC destinations.
We also create RFC destinations to client 000 of all the satellite systems.
<b>Don't you think these RFC destinations might interfere with each other?</b>
Q.6) Has anyone successfully configured ChaRM? Can you please share the configuration documents with me?
Please note:
<b>All contributors will be handsomely rewarded with points.</b>

Hi,
Check this
Note 128447 - Trusted/Trusting Systems
For your Q4.
Q3.)
Establishing Domain link - That's the right way. Go ahead.
These are the steps.
<b>1.Define Transport Routes for System Landscape</b>
Assign exactly one development system to a production system, and make sure these two systems are connected by exactly one unique transport track. If a development system and a production system are connected by more than one transport track, this may lead to inconsistencies within the transport distribution. This type of transport configuration cannot be supported by Change Request Management and may cause inconsistencies within the tools involved.
<b>2. Activate Extended Transport Control</b>
The CTC parameter should be '1'
<b>3.Configure Transport Strategy</b>
Deactivate the QA Approval.
<b>4. Activate Trusted Services.</b>
5.Activate Domain Links.
You have to activate domain link between systems.
6. Generate RFC Destinations to Client 000
Hope this helps.
feel free to revert back.
--Ragu -
Questions about db_keep_cache_size and Automatic Shared Memory Management
Hello all,
I'm coming upon a server that I'm needing to pin a table and some objects in, per the recommendations of an application support call.
Looking at the database, which is a 5 node RAC cluster (11gr2), I'm looking to see how things are laid out:
SQL> select name, value, value/1024/1024 value_MB from v$parameter
2 where name in ('db_cache_size','db_keep_cache_size','db_recycle_cache_size','shared_pool_size','sga_max_size');
NAME                   VALUE       VALUE_MB
sga_max_size           1694498816  1616
shared_pool_size       0           0
db_cache_size          0           0
db_keep_cache_size     0           0
db_recycle_cache_size  0           0
Looking at granularity level:
SQL> select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%';
GRANULE_SIZE/VALUE
2048
Then....I looked, and I thought this instance was set up with Auto Shared Mem Mgmt....but I see that sga_target size is not set:
SQL> show parameter sga
NAME TYPE VALUE
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1616M
sga_target big integer 0
So, I'm wondering first of all...would it be a good idea to switch to Automatic Shared Memory Management? If so, is this as simple as altering system set sga_target =...? Again, this is on a RAC system, is there a different way to do this than on a single instance?
If that isn't the way to go...let me continue with the table size, etc....
The table I need to pin is:
SQL> select sum (blocks) from all_tables where table_name = 'MYTABLE' and owner = 'MYOWNER';
SUM(BLOCKS)
4858
And block size is:
SQL> show parameter block_size
NAME TYPE VALUE
db_block_size integer 8192
So, the space I'll need in memory for pinning this is:
4858 * 8192 /1024/1024 = 37.95.......which is well below my granularity mark of 2048
So, would this be as easy as setting db_keep_cache_size = 2048 with an alter system call? Do I need to set db_cache_size first? What do I set that to?
Thanks in advance for any suggestions and links to info on this.
cayenne
Edited by: cayenne on Mar 27, 2013 10:14 AM
Edited by: cayenne on Mar 27, 2013 10:15 AM

JohnWatson wrote:
This is what you need: alter system set db_keep_cache_size=40M;
I do not understand the arithmetic you do here: select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%'; It shows you the number of buffers per granule, which I would not think has any meaning.

I'd been looking at some different sites studying this, and what I got from that was that this granularity gives you the minimum you can set db_keep_cache_size to: if you try setting it below this value it will be bumped up to it, and each increase you give the keep cache will be in increments of the granule figure...?
Thanks,
cayenne -
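For reference, the pinning setup discussed in this thread boils down to two statements (a sketch using the MYTABLE/MYOWNER names from the question; the 40M follows JohnWatson's suggestion rather than the granule arithmetic):

```sql
-- Size the KEEP pool large enough for the ~38 MB table
ALTER SYSTEM SET db_keep_cache_size = 40M;

-- Assign the table to the KEEP buffer pool so its blocks stay cached
ALTER TABLE MYOWNER.MYTABLE STORAGE (BUFFER_POOL KEEP);
```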
Blue Screen of Death - Memory Management
I have gotten the blue screen of death twice now. The screen went by really fast but I think it said something about memory, like memory management. I copied the problem details:
Problem signature:
Problem Event Name: BlueScreen
OS Version: 6.1.7601.2.1.0.768.3
Locale ID: 4105
Additional information about the problem:
BCCode: 1a
BCP1: 0000000000041790
BCP2: FFFFFA80022B8DF0
BCP3: 000000000000FFFF
BCP4: 0000000000000000
OS Version: 6_1_7601
Service Pack: 1_0
Product: 768_1
Files that help describe the problem:
C:\Windows\Minidump\071314-34335-01.dmp
C:\Users\Tiffany Jiang\AppData\Local\Temp\WER-53804-0.sysdata.xml
Both times happened randomly, about two weeks apart, and the laptop (Toshiba Satellite; bought brand new only about three weeks ago) still works fine after it restarts, but I don't want it to keep happening. I also don't know what the problem is. I used my previous Windows 7 laptop (HP Pavilion g6) the same way for three years and it never blue-screened even once. I tried going to the Toshiba support website to see if I need to download any new drivers, but the website does not seem to be working on my laptop; every time I go on it, it's just lines of letters and numbers.
Help would be much appreciated, thank you!

Satellite C55-A5195
Downloads here.
Bug Check 0x19: BAD_POOL_HEADER
Caused By Driver : vsdatant.sys <- True Vector device driver
You have ZoneAlarm on board? If so, uninstall it and see what happens.
==================================================
Dump File : 071314-34335-01.dmp
Crash Time : 7/15/2014 9:58:18 AM
Bug Check String : MEMORY_MANAGEMENT
Bug Check Code : 0x0000001a
Parameter 1 : 00000000`00041790
Parameter 2 : fffffa80`022b8df0
Parameter 3 : 00000000`0000ffff
Parameter 4 : 00000000`00000000
Caused By Driver : ntoskrnl.exe
Caused By Address : ntoskrnl.exe+75bc0
File Description :
Product Name :
Company :
File Version :
Processor : x64
Computer Name :
Full Path : C:\Test\071314-34335-01.dmp
Processors Count : 4
Major Version : 15
Minor Version : 7601
==================================================
==================================================
Dump File : 070514-42837-01.dmp
Crash Time : 7/15/2014 9:58:18 AM
Bug Check String : BAD_POOL_HEADER
Bug Check Code : 0x00000019
Parameter 1 : 00000000`00000020
Parameter 2 : fffffa80`0b390140
Parameter 3 : fffffa80`0b390160
Parameter 4 : 00000000`04020008
Caused By Driver : vsdatant.sys
Caused By Address : vsdatant.sys+47054
File Description :
Product Name :
Company :
File Version :
Processor : x64
Computer Name :
Full Path : C:\Test\070514-42837-01.dmp
Processors Count : 4
Major Version : 15
Minor Version : 7601
==================================================
-Jerry -
Configuration for Transaction Management
Hi,
I am working with WebLogic Server SP1. I am facing a problem configuring transaction management.
I have a session EJB, say SEJB, and two entity EJBs, say EEJB1 and EEJB2. EEJB1 is for the parent table and EEJB2 is for the child table.
I have two records in the database REC1 and REC2.
REC2 has dependencies and cannot be deleted, while REC1 can be deleted.
In weblogic-ejb-jar.xml I have configured as follows:
<weblogic-enterprise-bean>
<ejb-name>SEJB</ejb-name>
<stateless-session-descriptor>
<pool>
<max-beans-in-free-pool>300</max-beans-in-free-pool>
<initial-beans-in-free-pool>150</initial-beans-in-free-pool>
</pool>
</stateless-session-descriptor>
<reference-descriptor>
<ejb-reference-description>
<ejb-ref-name>EEJB</ejb-ref-name>
<jndi-name>EEJBean</jndi-name>
</ejb-reference-description>
</reference-descriptor>
<jndi-name>SEJBn</jndi-name>
</weblogic-enterprise-bean>
Further, in ejb-jar.xml I have set up the <trans-attribute> as RequiresNew for the session bean and Supports for the EEJB. Something like this:...
<container-transaction>
<method>
<ejb-name>SEJB</ejb-name>
<method-intf>Remote</method-intf>
<method-name>*</method-name>
</method>
<trans-attribute>RequiresNew</trans-attribute>
</container-transaction>
In spite of this setting, when, through the client, I select the two records REC1 and REC2 at the same time and delete them, REC1 gets deleted while REC2 does not, giving a TransactionRollbackException.
Ideally, since both are part of a single transaction, both should have been rolled back.
Please suggest if I am missing some kind of configuration parameter or setting. I'll be more than happy to provide more details to get the problem solved.
I can also be reached at [email protected]
Thanks in advance,
Regards,
Rishi
TCode: SWF5
Enterprise_Extensions:
-> EA-FS
Enterprise_Business_Functions:
-> FIN_TRM*
Rg
Lorenz -
Oracle 9i Automatic PGA Memory Management
Hello,
my team and I are facing difficulties changing the size of the PGA used by our server processes for HASH JOIN, SORT... operators;
here you can see the results of "select * from v$pgastat":
[pgastat dynamic view results|http://pastebin.com/m210314dc]
We have been increasing our pga_aggregate_target parameter consecutively, from 1.7 GB initially to 4 GB and finally 6 GB; the values of "global memory bound" and "aggregate pga auto target" in the link above are still equal to 0.
I have been reading threads on the forum and the documentation (see below). I understand how the global memory manager (CKPT) computes the SQL memory target and then the global memory bound; as far as I understand, I can only "play" with the pga_aggregate_target value in order to increase the size of our PGAs (I exclude playing with hidden parameters).
- Joze Senegacnik: Advanced Management of working areas in Oracle 9i/10g : http://tonguc.yilmaz.googlepages.com/JozeSenegacnik-PGAMemoryManagementvO.zip
- Dageville Benoit and Zait Mohamed: SQL memory management in oracle 9i
Here is different information that could be useful:
OS: solaris 10 (db running in a non global zone)
Arch: 64-bit sparcv9 kernel modules
Physical memory: 32 Gb (being shared between all non global zones)
Oracle version: 9.2.0.5 32bits
Values of init parameters and hidden parameters that could be relevant:
[init parameters|http://pastebin.com/m40340cf4]
[hidden parameters|http://pastebin.com/m50d74c53]
Maybe useful queries:
over work areas views, I use the following script:
[wa_analysis.sql|http://pastebin.com/d606ebd9b]
and the result of it:
[result of script wa_analysis.sql|http://pastebin.com/m5f49a2e5]

Joze Senegacnik wrote:
- either your sessions are using a lot of memory for storing variables like pl/sql arrays which is subtracted from automatic management: PGA_AGGREGATE_TARGET - (aggregated persistent area + a part of the run time area of all server processes)
- you are hitting a bug
- or maybe something else

I am really happy you came to this conclusion too; it is the same one my team and I reached, and we have submitted it to Oracle Support via Metalink SR 3-1216060641. We asked whether we hit the following bug (in note 1), or whether we leak PL/SQL or Java memory... or something else indeed.
note 1: PGA_AGGREGATE_TARGET Assigned Memory Is Left Unconsumed When Set High [ID 844542.1]
Joze Senegacnik wrote:
I would like to know:
1.) what were the values for global memory bound and auto target immediately (or in a short time) after the database restart, or when you increased them?

Just after the restart of the database and just after the change of P_A_T, we queried v$pgastat immediately, and the values of global memory bound and auto target were equal to 0 bytes.
2.) If you are able to change the value of PGA_AGGREGATE_TARGET (P_A_T) to 10GB, what happens with global memory bound and auto target? They should be positive at least for a short time. As this is a dynamic parameter you can change it for a short time, run queries and set it back.

We plan to do this tonight. We have "heavy" ITIL change management procedures that only allow changes approved by the change manager, and only during the night maintenance window on the production system; I will come back to you tomorrow. But we have been increasing from 1.7 GB to 4 GB to 6 GB, and each time I queried v$pgastat within the next 2 minutes, global memory bound and auto target were equal to 0 bytes.
3.) Have you checked at the OS level how much memory the server processes are using - do these numbers agree with what Oracle says?

Not during problematic activities (meaning active work areas performing HASH JOIN, SORT... operators); unfortunately it is a production system, and even though it performs poorly we are not allowed to retry the poor queries, but if it comes up again I'll do it.
During low activity, here are the results, pasted with the scripts I used:
[pga processes info in oracle|http://pastebin.com/f2e540062]
I spooled the result rows of this previous script to /var/tmp/pga_processes.log, then looped over all process PIDs and displayed the pmap anon output like this:
cat /var/tmp/pga_processes.log | awk -F' ' '{print $5}' | xargs -n 1 -i pmap -x {} | grep -v 'Addres' | egrep 'Kb' 2>&1 > /var/tmp/pga_processes_os.log
then merged the two files line by line with the Unix paste command; here are the results:
[os and oracle pga informations|http://pastebin.com/f4135c8a6]
4.) How many server processes are running on your system on average/max, and are you using just dedicated processes or also shared?

On average 250, and we are only using dedicated processes.
5.) At times of low activity, is the global memory bound still 0, or does it become > 0?

I have been querying every 15 minutes during more than 24 hours of low activity; it still stays at 0.
5.) Are you experiencing paging/swapping at the OS level?

No; here are Orca figures for details:
[free memory|http://img509.imageshack.us/img509/5897/ohuron1asd2gauge1024xfr.png]
swap
[pagein pageout|http://img121.imageshack.us/img121/6946/ohuron1asd2gaugepginper.png]
[memory usage|http://img19.imageshack.us/img19/2213/ohuron1asd2gaugeppkerne.png]
6.) Please post the result of: select * from X$QESMMSGA;

During low activity: [results X$QESMMSGA|http://pastebin.com/f61df7093]
While you answer my questions I'll try to figure out what we can do to properly diagnose the problem. As you are on 9i it is a little bit harder.

I am really grateful for your help; as we say in my country, "if you need two arms one day to carry something, call me."
--Jeremy Baumont -
Memory Manager counters, Please share your coments on below counters?
Hi Folks,
I configured the counters below. Could you please comment on them? I noticed that unused memory is about 26 GB; is there anything abnormal?
Memory Manager\Free Memory :2 GB
Memory Manager\Maximum Workspace Memory : 38 GB
Memory Manager\Granted Workspace Memory : 3 MB
Memory Manager\Reserved Server Memory : 3 MB
Memory Manager\Stolen Server Memory : 6 GB
Memory Manager\Target Server Memory : 64 GB
Thanks in advance.

Vijay,
I am not sure what comments you actually want. What do you want to achieve or diagnose with these counters? Please be clear; I have always requested that you put as much information in the question as possible.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
Memory management while implementing JNI interfaces.
The assumption here is that java code calls a C function from a
shared library in Linux.
How should memory management be done in the native code called
from Java using a JNI interface?
What role does the Garbage Collector play in handling memory that
was allocated in the native C code?
If I allocate memory using malloc() in the C program, should I
explicitly free it? Will the GC take care of it?
Does anyone have an idea of how memory management be done while
implementing a JNI interface?
Thanks,
Nishant

NishantMungse wrote:
The assumption here is that java code calls a C function from a
shared library in Linux.
How should memory management be done in the native code called
from Java using a JNI interface?
C: Alloc something giving a pointer
C: Return pointer to Java as a long.
C: As needed methods take a long parameter and cast to correct pointer.
C: Provide a destroy() method that takes a long, casts to correct pointer and deallocates correctly.
Java: class keeps long
Java: As needed JNI methods are passed long
Java: Provide a destroy() method which passes long to C destroy() method if long is not zero. After call set long to zero.
Java: Optional: Add a finalizer. It calls destroy()
Last step is optional if your programming environment is strict AND the usage of the class has a restricted scope.
The above assumes that you are going to use the memory in a 'normal' way: for instance, the allocations are relatively small and exist for short amounts of time. If that isn't true, then you might need to tune the usage in much the same way that you would if you had a Java class that consumed a large amount of memory.
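The Java side of the steps above can be sketched like this (a simplified stand-in: the native calls are stubbed with plain Java so the lifecycle logic is self-contained; in real JNI code nativeAlloc/nativeFree would be declared native and implemented in C with malloc()/free()):

```java
// Sketch of the long-handle pattern: Java holds the C pointer as a long
// and provides an idempotent destroy()/close() that frees it exactly once.
public class NativeResource implements AutoCloseable {
    private long handle; // opaque pointer value returned by the C side

    private static long nativeAlloc() { return 0xCAFEL; } // stand-in for malloc()
    private static void nativeFree(long h) { /* free((void*) h) in C */ }

    public NativeResource() {
        handle = nativeAlloc();
    }

    public boolean isDestroyed() {
        return handle == 0;
    }

    // Free once, then zero the handle so a second call (e.g. from a
    // finalizer) is a safe no-op.
    @Override
    public void close() {
        if (handle != 0) {
            nativeFree(handle);
            handle = 0;
        }
    }

    public static void main(String[] args) {
        NativeResource r = new NativeResource();
        r.close();
        r.close(); // second call is a no-op
        System.out.println(r.isDestroyed());
    }
}
```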
What role does the Garbage Collector play in handling memory that
was allocated in the native C code?
It doesn't.
At some point it can interfere with the heap though.
If I allocate memory using malloc() in the C program, should I
explicitly free it? Will the GC take care of it?
You must explicitly free it.
Does anyone have an idea of how memory management be done while
implementing a JNI interface?

See above.
Sort Area Size in Automatic memory management
Hello All
I am aware that the *_AREA_SIZE parameters are ignored if PGA_AGGREGATE_TARGET is set.
So how is it possible that if we increase SORT_AREA_SIZE, the performance improves?
Does this necessarily mean that PGA_AGGREGATE_TARGET was not set to a proper value, and that it instead used SORT_AREA_SIZE?
Thanks

Hi,
If you have set workarea_size_policy=AUTO, then under the automatic PGA memory management mode the sizing of work areas for all sessions becomes automatic, and the *_AREA_SIZE parameters are ignored by all sessions running in that mode; any changes you make to them will have no effect.
If you want to handle the *_AREA_SIZE parameters manually, turn off automatic PGA memory management by setting workarea_size_policy=MANUAL; your changes to those parameters will then take effect. It is advisable, however, to keep PGA management automatic.
To check whether your PGA is sized properly, check the v$pga_target_advice view:
SELECT round(PGA_TARGET_FOR_ESTIMATE/1024/1024) target_mb,
ESTD_PGA_CACHE_HIT_PERCENTAGE cache_hit_perc,
ESTD_OVERALLOC_COUNT
FROM V$PGA_TARGET_ADVICE;
This will show you how well your PGA target is sized.
chirag -
I have a process that can benefit from lots of physical memory. I'm having trouble figuring out how to manage it.
Essentially, the process needs pages of memory to store data temporarily. When the systems hits some kind of memory limit, then it can flush all data to disk and start over. More memory significantly improves performance because flushes are less frequent.
The problem arises because I can't distinguish between physical and virtual memory, and I can't tell how much memory is available to me. When I grab too much memory, the whole process grinds to a halt because of swapping, or generates Out of Memory errors. Too little and performance is sub-optimal. Further complicating the situation is the fact that other threads are running in the same JVM consuming memory, and that we distribute this app into unknown environments with unknown amounts of memory.
So here are my thoughts so far:
Strategy 1:
Just allocate pages as needed until an OutOfMemoryError occurs. Trap it, flush, and continue. This works, but the app grinds to a halt once the system starts allocating virtual memory.
Strategy 2:
Allocate pages as needed until Runtime.getRuntime().freeMemory() drops below a threshold value. This doesn't work because freeMemory() doesn't report the amount of free system memory, only the amount of free memory in the pool currently allocated from the operating system. On startup it could show as little as 2 MB free on a 512 MB machine.
Strategy 3:
Allocate pages up to a fixed amount, like 64mb. This fails when less than 64mb is free, and fails to take advantage of other memory that may be available. We could make this amount user-configurable, but most of our users aren't going to be capable of handling that.
Strategy 4:
Do something with soft or weak references. This has potential, but I can't figure out how to get notified when a reference is about to be garbage collected.
Anyone have better ideas?

To summarize: you would like to manage memory; real memory is faster than virtual memory; and you can't tell how much real memory is available to you.
Let's take the middle statement: when you start using virtual memory, your application slows down. If that's the case, maybe you could have it time itself and, when it starts slowing down, assume that's the cause. You'd need a standard bit of code that you could monitor for slowdowns. Of course, that could be fooled if another task started eating the CPU, for example by doing a search-and-replace in a 100 MB document in memory. But most likely you would be running this task on a dedicated system, or at least one without large competing tasks.
I think you're in uncharted territory here. I haven't seen any articles on this kind of memory management in Java. My suggestion would be to provide tuning parameters (such as the maximum memory to use) and let your users adjust them based on actual conditions, instead of having the application try to optimize itself. As you observe, your app doesn't have nearly enough information to do that. -
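Regarding Strategy 4 above: the usual pattern (a sketch with illustrative names, not from the original thread) is to pair SoftReference with a ReferenceQueue. You are not notified *before* collection, but polling the queue tells you which entries the collector has already cleared under memory pressure, so the map can be purged; any data that must survive has to be flushed to disk before it becomes softly reachable, or be re-creatable:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class PageCache {
    /** SoftReference that remembers its key so the map can be purged after GC. */
    static final class PageRef extends SoftReference<byte[]> {
        final int pageId;
        PageRef(int pageId, byte[] page, ReferenceQueue<byte[]> q) {
            super(page, q);
            this.pageId = pageId;
        }
    }

    private final Map<Integer, PageRef> pages = new HashMap<>();
    private final ReferenceQueue<byte[]> queue = new ReferenceQueue<>();

    public void put(int pageId, byte[] page) {
        drainQueue();
        pages.put(pageId, new PageRef(pageId, page, queue));
    }

    /** Returns the page, or null if the collector reclaimed it. */
    public byte[] get(int pageId) {
        drainQueue();
        PageRef ref = pages.get(pageId);
        return ref == null ? null : ref.get();
    }

    /** Remove map entries whose pages were cleared under memory pressure. */
    private void drainQueue() {
        Reference<? extends byte[]> r;
        while ((r = queue.poll()) != null) {
            pages.remove(((PageRef) r).pageId);
        }
    }

    public int size() {
        drainQueue();
        return pages.size();
    }
}
```

Soft references are cleared only when the heap is under pressure, so the JVM effectively decides when the cache shrinks; this sidesteps the tuning problem at the cost of giving up control over exactly when entries disappear.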
Large SGA On Linux and Automatic Shared Memory Management problem
Hello
I'm running Oracle 10gR2 on 32-bit Linux and followed the http://www.oracle-base.com/articles/linux/LargeSGAOnLinux.php article.
Getting a larger SGA works fine, but when I set the sga_target parameter to use Automatic Shared Memory Management,
I receive this error:
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00824: cannot set sga_target due to existing internal settings, see alert
log for more information
and the alert log reports:
Cannot set sga_target with db_block_buffers set
My question is: when db_block_buffers is set, is it impossible to use Automatic Shared Memory Management?
Is there any solution for using both a large SGA and Automatic Shared Memory Management?
thanks
Edited by: TakhteJamshid on Feb 14, 2009 3:39 AM

TakhteJamshid wrote:
Does it mean that when we use a large SGA, Automatic Shared Memory Management is impossible?

Yes, that's true. An attempt to do so will result in this:
ORA-00825: cannot set DB_BLOCK_BUFFERS if SGA_TARGET or MEMORY_TARGET is set
Cause: SGA_TARGET or MEMORY_TARGET was set with DB_BLOCK_BUFFERS set.
Action: Do not set SGA_TARGET or MEMORY_TARGET, or switch to the new cache parameters and do not use DB_BLOCK_BUFFERS, which is an old cache parameter.
HTH
Aman....
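A common way out of the ORA-00824/ORA-00825 conflict (a sketch, not from the thread; it assumes an spfile is in use and that a restart is acceptable) is to drop the old-style parameter and move to the new cache parameters before enabling ASMM:

```sql
-- Remove the old-style cache parameter (takes effect at next restart)
ALTER SYSTEM RESET db_block_buffers SCOPE=SPFILE SID='*';

-- Optionally set a floor with the new cache parameter (0 lets ASMM manage it)
ALTER SYSTEM SET db_cache_size = 0 SCOPE=SPFILE;

-- Enable Automatic Shared Memory Management; 1500M is only an example value
ALTER SYSTEM SET sga_target = 1500M SCOPE=SPFILE;

-- Then restart the instance: SHUTDOWN IMMEDIATE; STARTUP;
```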