Recommended parameter for memory management
Hi, I have SAP ECC 5.0 installed on an iSeries server with DB2 as the DBMS. In ST02 I can see some swaps, so I would like to optimize the memory management parameters. Can anyone give me the right suggestions?
Bye
Luca
Hi Luca,
if you do see some swaps, please increase the corresponding buffer and memory values shown in ST02 - this is application dependent (and has nothing to do with DB2).
As you are on 64-bit, you can increase them without any problem ...
Regards
Volker Gueldenpfennig, consolut.gmbh
http://www.consolut.de - http://www.4soi.de - http://www.easymarketplace.de
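For ST02 swap tuning, the parameters that usually matter are the extended memory and buffer sizes. A sketch of typical candidates (illustrative names and values only; which ones to raise depends on which buffers actually show swaps in ST02):

```
em/initial_size_MB = 4096          # extended memory pool
abap/heap_area_total = 2000683008  # total heap limit per instance
zcsa/table_buffer_area = 30000000  # generic table buffer
rsdb/ntab/entrycount = 30000       # nametab buffer entries
```

Change values only after checking the hit ratios and swap counts for the specific buffers in ST02.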
Similar Messages
-
Recommendations needed for document management app
I want to be able to mail text files in various formats (*.pdf, *.html, *.docx etc) to my iPad, where the
files will then be displayed by title. I also want to add a couple of sentences of annotation to each item.
I should then be able to display any given item with a click or tap.
I can mail files to my iBooks app right now OK, but I can't see any way to annotate an iBooks entry, and I really need that.
To review: Imagine that I am working from my desktop. I see a file I want to save to my iPad. I
download it to my PC and mail it as an attachment to my icloud account. I go to Mail in my iPad
and tap the 'Move' icon. The 'Move' icon gives me the opportunity to save it to the document management app,
whatever it is called. I then go to that app and annotate the saved file. The annotation appears next to
the file title. When I tap the entry the file displays.
I assume the right name for an app like this is text management or file management or document management.
What app does this?
Thanks,
Fred Hapgood

I do a lot of counselling about a very complicated and dynamic subject. Right now I have to lug 20-30 pounds of reference texts from point A to point B to point C. I bought the iPad Air in hopes I could store this material in the tablet. But the reference material is complicated -- I guess I said that -- and I need lots of words to clarify the subject of each pamphlet and brochure, etc., before I can clearly differentiate one from the other. Imagine I have documents A, B and C. In the best of all worlds I would see a metadata display that looked like this:
File A -- Comments
File B -- Comments
File C -- Comments
Taking the filenames and the comments together I decide I need to look at C. I tap that and it displays.
That is the best of all worlds. But I see GR allows me to rewrite filenames, and maybe doing that will get me where I want to go.
It does seem strange that with all the document-handling packages out there, there isn't one that allows comments in metadata. Oh well.
Thanks again,
Fred -
Recommended landscape for Solution Manager
Hello Evereybody,
We are planning to use almost all the functionalities of SOLMAN, like Change Management, Operations Management, System Monitoring, etc.
We are planning to go with SOLMAN DEV, QUALITY and PROD.
Could you please add your comments on having 3 SOLMAN systems in one landscape? And how should they be managed?
Kindly provide a list of clients who have implemented 3 systems for SOLMAN, and how are they optimally using those 3 systems?
Thanks in advance!
Pradeep.

Hi
For the landscape you can check the FAQ section on Service Marketplace -> Solution Manager under Application Lifecycle Management
https://websmp207.sap-ag.de/support
To run SAP Solution Manager for minimal operations (i.e. Maintenance Optimizer, EarlyWatchAlert) a one-system landscape is sufficient. For mission-critical operations (Change Request Management, System Monitoring) the SAP Solution Manager is best run on a 3-system landscape (DEV, QAS, PROD).
Hope it helps.
Regards
Prakhar -
Parameter configuration of memory management
Hi all,
I received an analysis check from SAP that suggests I modify some memory management parameter values.
SAP suggest me:
ztta/roll_area: 3000329(current value) -> 6500000(Recommended value)
rdisp/PG_SHM: 8192(current value) -> 16384(Recommended value)
ztta/short_area: 3200000(current value) -> 4000000(Recommended value)
My question is: can I modify these parameters with the values suggested by SAP in my environment?
Is there any documentation which explains them in more detail?
This is the information of the productive system:
machinfo
CPU info:
12 Intel(R) Itanium 2 9000 series processors (1.6 GHz, 24 MB)
533 MT/s bus, CPU version C2
18 logical processors
Memory: 65339 MB (63.81 GB)
Firmware info:
Firmware revision: 7.44
FP SWA driver revision: 1.18
IPMI is supported on this system.
Invalid combination of manageability firmware has been installed on this system.
Unable to provide accurate version information about manageability firmware
Platform info:
Model: "ia64 hp superdome server SD64B"
Machine ID number: xxxxxx
Machine serial number: xxxxxx
OS info:
Nodename: hostaddress
Release: HP-UX B.11.31
Version: U (unlimited-user license)
Machine: ia64
ID Number: xxxx
vmunix releaseversion:
@(#) $Revision: vmunix: B.11.31_LR FLAVOR=perf

Hi,
I am sure that SAP must have recommended these values after analysing your system, so you can go with them at any time. What is also good is that if you face any problem in the future with these values suggested by SAP, you can always escalate back to SAP.
Please do not forget to take a backup of your instance profile before making any change. Always adjust the values for pool 10 and pool 40 as well, otherwise your instance may not come up.
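Applied via transaction RZ10 (or by editing the instance profile directly), the recommended changes would look like this; the values are the ones from the SAP analysis above:

```
ztta/roll_area = 6500000
rdisp/PG_SHM = 16384
ztta/short_area = 4000000
```

Restart the instance afterwards for the new values to take effect.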
For more details about these parameters, please refer to below link.
http://help.sap.com/saphelp_nw04/helpdata/en/02/962acd538111d1891b0000e8322f96/frameset.htm
With Regards,
Saurabh -
Default storage for locally managed tablespaces
The documentation says you cannot have a default storage parameter for locally managed tablespaces. Does this mean that we cannot specify
INITIAL
NEXT
PCTINCREASE
MINEXTENTS
MAXEXTENTS for such tablespaces, or is there another way we can, without using default storage?
thanks

I am not sure where you read that the default storage clause can't be given. Please see here:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#sthref1149
From the doc,
Creating a Locally Managed Tablespace
If the CREATE TABLESPACE statement includes a DEFAULT storage clause, then the database considers the following:
If you specified the MINIMUM EXTENT clause, the database evaluates whether the values of MINIMUM EXTENT, INITIAL, and NEXT are equal and the value of PCTINCREASE is 0. If so, the database creates a locally managed uniform tablespace with extent size = INITIAL. If the MINIMUM EXTENT, INITIAL, and NEXT parameters are not equal, or if PCTINCREASE is not 0, the database ignores any extent storage parameters you may specify and creates a locally managed, autoallocated tablespace.
If you did not specify MINIMUM EXTENT clause, the database evaluates only whether the storage values of INITIAL and NEXT are equal and PCTINCREASE is 0. If so, the tablespace is locally managed and uniform. Otherwise, the tablespace is locally managed and autoallocated.
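The two cases described above can be made concrete with a sketch (tablespace and datafile names are hypothetical):

```sql
-- MINIMUM EXTENT = INITIAL = NEXT and PCTINCREASE = 0:
-- the result is a locally managed UNIFORM tablespace with extent size = INITIAL.
CREATE TABLESPACE ts_uniform
  DATAFILE '/u01/oradata/ts_uniform01.dbf' SIZE 100M
  MINIMUM EXTENT 1M
  DEFAULT STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0)
  EXTENT MANAGEMENT LOCAL;

-- Storage values that do not satisfy the conditions: the extent storage
-- parameters are ignored and the tablespace is locally managed AUTOALLOCATE.
CREATE TABLESPACE ts_auto
  DATAFILE '/u01/oradata/ts_auto01.dbf' SIZE 100M
  DEFAULT STORAGE (INITIAL 1M NEXT 2M PCTINCREASE 50)
  EXTENT MANAGEMENT LOCAL;
```

So the storage clause is not rejected for locally managed tablespaces; it is interpreted (or ignored) as described in the documentation quoted above.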
HTH
Aman.... -
Anyone use nio-memory-manager ?? what's it good for?
Can someone give me an example of when the nio-memory-manager should be used?
Thanks,
Andrew

If I remember the outcome of my experiments with NIO correctly, the situation is as follows:
1. Allocating/releasing huge shared memory blocks over and over can lead to OS/JVM issues. To avoid this I allocated the max size I wanted from the start (this is an option when configuring "off-heap" storage I believe). When doing it this way I had no reliability issues with the NIO memory manager in my tests.
2. Tangosol/Oracle used to claim that off-heap (NIO memory manager) storage results in worse performance than on-heap - I could not see any clear indication of this, but it may be application dependent. For our app the reduced number of JVMs per server (reducing network communication, the number of threads, the risk of any JVM performing GC at a given time, etc.) seemed to more than offset the allegedly slower memory manager, resulting in MUCH BETTER performance! A lot of queries (at least for us) anyhow mainly work against indexes that are always stored "on-heap"...
3. There is a limitation of 2GB per NIO block (at least in 32-bit JVMs; not sure about 64-bit ones - I have never seen any point in using them, since heaps larger than 2GB seldom work well anyhow and each pointer consumes double the space in the heap and CPU caches), but this limit is per CACHE and separate for PRIMARY and BACKUP, I believe! So my understanding is that if you (using a 64-bit OS) for instance have two (equally big) caches, you could allocate at most 2 * 2 * 2 = 8GB of off-heap memory for holding data per JVM (without ANY impact on GC pauses!) and in addition use as much heap as you can get away with (given GC pause times) for holding the indexes to that data. This would make a huge difference in JVM count! For example, we today have to run 10+ JVMs per server using "on-heap" storage, while with "off-heap" storage we could probably get that down to one or two JVMs per server!
4. There may be both OS and JVM parameters that you need to set (depending on the OS and JVM used!) in order to allocate large amounts of shared memory using NIO (the default is rather small).
As for the question about de-allocation, I never saw any sign of memory leaks with the NIO memory manager (i.e. space previously occupied by deleted objects was reused for new objects), but as I mentioned above you are better off allocating the maximum-size NIO memory block you intend to use up-front. That memory will then remain allocated for this use, so if your amount of cache data varies and you would like to use memory for other purposes (like heap!) at some point, you may be better off sticking with "on-heap" storage, which is more flexible in that respect.
As I previously mentioned, off-heap is today (until Oracle implements the improvement request!) really only an option if you do not plan to use "overflow protection" or your objects are of fixed size :-(
And if you are interested in using servers with a lot of memory and would like to use "off-heap" please talk to your Oracle sales rep about it! If enough people do that it may allow the Coherence developers to assign more time for making "off-heap" storage better! With this feature in place Coherence will be even more of a "killer application" than it already is!
Best Regards
Magnus -
Modifying Memory Optimization parameter for BPEL process in SOA 11g
Hello
I have turned on memory optimization parameter for my BPEL process in the composite.xml (11g)
this is what I have in composite.xml:
<property name="bpel.config.inMemoryOptimization">false</property>
How do we modify this parameter in the EM console at runtime? I changed this property to "true" using the System MBean Browser, but it wasn't taking effect. I thought the SOA server had to be restarted (similar to what we used to do in 10g), but when I restart the SOA server the parameter goes back to whatever the value was in composite.xml, ignoring the change I made in the System MBean Browser.
Please share your thoughts.
Thanks in advance.
Raja

Deploying a newer version is not an option, as the endpoints could change (not sure if it would in 11g, but in 10g it does), and also our service consumers will be pointing to the older version.
As mentioned above, if clients are using a URL without the version, then the call will be forwarded to the default version of the composite internally. No manual tweaking is required for this. Just make sure that while deploying the new version you mark it as the default.
Besides, we report on service metrics and having multiple versions just complicates things.
Not at all. If you are not using the versioning feature, you are really underutilizing Oracle SOA 11g. Remember that metrics can be collected for a single composite with the same effort, irrespective of the number of composite versions deployed. Only a few product tables refer to the version while storing the composite name; the rest use only the composite name without the version. I do not know how you are collecting service metrics, but we use DB jobs for the same and they work perfectly with any number of composites having multiple versions deployed.
The idea is to do some debugging and collect an audit trail in case there is a production issue by disabling the inMemoryOptimization parameter. This is a live production environment and deploying whenever we want is not even an option for us, unfortunately.
Why not debug by increasing the log level? Diagnostic logs are the best option to debug an issue, even in production. For getting an audit trail you may reproduce the issue in lower environments. I think no organization will allow re-deployments just for debugging an issue in production, unless it is too critical an issue to handle.
Is this not supported in 11g? If it isn't, it does seem like a bug to me.
You may always go ahead and raise a case with support.
Regards,
Anuj -
Hi gurus
In resource-based throttling, what's the recommended setting for "Process memory usage" ("process virtual" in the resource-based throttling tab of the UI) for a 64-bit host
on a 64-bit Windows OS?
According to MS (http://msdn.microsoft.com/en-us/library/ee308808(v=bts.10).aspx):
"By default, the
Process memory usage throttling threshold is set to 25. If this value is exceeded and the BizTalk process memory usage is more than 300 MB, a throttling condition may occur. On a 32-bit
server, you can increase the Process memory usage value to 50. On a 64-bit server, you can increase this value to 100. This allows for more memory consumption by the BizTalk process before throttling
occurs."
Does this mean that 100 is the recommended setting for a 64-bit host on a 64-bit Windows?
Thanks
Michael Brandt Lassen

Hi Michael,
The recommended setting is the default setting, which is 25.
If your situation is abnormal and you see the message delivery throttling state go to "4" when the following performance counters are high, or if you expect any of your integration processes could have an impact on the following counters, then you can consider the suggestion by Microsoft. Otherwise, don't change the default setting.
High process memory
Process memory usage (MB)
Process memory usage threshold (MB)
You can see these counters under “BizTalk:MessageAgent”
You can gauge these performance counters and their maximum values if you have done any regression/performance testing on your test servers. If you have seen these counters reach high values and cause throttling, then you can update the Process memory usage.
Or, if you are unexpectedly processing high-throughput messages in production which is causing these counters to go high and cause throttling, then you can update the Process memory usage.
These are the two cases where I would change the default throttling setting: when I know my expected process usage (from performance testing), or when my production server's processing has gone high due to an unexpected business spike (or any other reason) and caused throttling.
Just changing the default setting without an actual reason could have an adverse effect: you end up allocating more processing capacity while the actual message processing usage remains low, meaning you end up investing in underutilised resources.
Regards,
M.R.Ashwin Prabhu
If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply. -
SAP Parameter Recommended note for SAP Content Server 6.40
Hi..
we are planning to install SAP Content Server 6.40 with SAP MaxDB 7.6 on Solaris. Kindly provide the recommendations that need to be checked before installation of the production system,
like IO Buffer Cache [MB], Number of Sessions, or any SAP note with recommended parameters for SAP Content Server 6.40 with SAP MaxDB 7.6.
Regards,
Panu

Hello,
Did you already check the preparations and parameters in the SAP CS 6.40 installation guide?
You can find topics in the Installation Guide like :
- Planning and Sizing of the Database Instance
- Preparations
For more in depth tuning also have a look at the following document :
Operational Guide - SAP Content Server
This document contains the complete list of Content Server parameters.
Success.
Wim -
Is there an app that runs in the background and keeps optimizing the memory? I've seen a lot of memory manager apps that optimize the memory when you open them, but it would be great if there was an app that did that while also running in the background. If anyone knows of anything, or if it might be in a future release, please let me know. Thanks.

Jason

I could be wrong, but I don't think such an app is even possible given the restrictions in the software development kit. For that matter, I have grave doubts about those apps that claim useful memory management even when they're the running app; I can't see how they would accomplish anything at all beneficial given how iOS sandboxes apps.
Regards. -
About index memory parameter for Oracle text indexes
Hi Experts,
I am on Oracle 11.2.0.3 on Linux and have implemented Oracle Text. I am not an expert in this subject and need help with one issue. I created Oracle Text indexes with the default settings. However, in an Oracle white paper I read that the default setting may not be right. Here is an excerpt from the white paper by Roger Ford:
URL:http://www.oracle.com/technetwork/database/enterprise-edition/index-maintenance-089308.html
"(Part of this white paper below....)
Index Memory
As mentioned above, cached $I entries are flushed to disk each time the indexing memory is exhausted. The default index memory at installation is a mere 12MB, which is very low. Users can specify up to 50MB at index creation time, but this is still pretty low.
This would be done by a CREATE INDEX statement something like:
CREATE INDEX myindex ON mytable(mycol) INDEXTYPE IS ctxsys.context PARAMETERS ('index memory 50M');
To allow index memory settings above 50MB, the CTXSYS user must first increase the value of the MAX_INDEX_MEMORY parameter, like this:
begin ctx_adm.set_parameter('max_index_memory', '500M'); end;
The setting for index memory should never be so high as to cause paging, as this will have a serious effect on indexing speed. On smaller dedicated systems, it is sometimes advantageous to temporarily decrease the amount of memory consumed by the Oracle SGA (for example by decreasing DB_CACHE_SIZE and/or SHARED_POOL_SIZE) during the index creation process. Once the index has been created, the SGA size can be increased again to improve query performance."
(End here from the white paper excerpt)
My question is:
1) Applying this procedure (ctx_adm.set_parameter) required me to log in as the CTXSYS user. Is that right? Or can it be avoided and done from the application schema? The CTXSYS user is locked by default and I had to unlock it. Is that OK to do in production?
2) What value should I use for max_index_memory? Should it be 500 MB? My SGA is 2 GB in Dev/QA and 3 GB in production. Also, at index creation, what value should I set for the index memory parameter? I had left it at the default, but how should I change it now? Should it be 50MB as shown in the example above?
3) The white paper also refers to rebuilding an index at some interval, like once a month: ALTER INDEX DR$index_name$X REBUILD ONLINE;
Is this correct advice? I would like to ask the experts before doing that. We are on Oracle 11g and the white paper was written in 2003.
Basically while I read the paper, I am still not very clear on several aspects and need help to understand this.
Thanks,
OrauserN

Perhaps it's time I updated that paper.
1. To change max_index_memory you must be a DBA user OR ctxsys. As you say, the ctxsys account is locked by default. It's usually easiest to log in as a DBA and run something like
exec ctxsys.ctx_adm.set_parameter('MAX_INDEX_MEMORY', '10G')
2. Index memory is allocated from PGA memory, not SGA memory, so the size of the SGA is not relevant. If you use too high a setting, your index build may fail with an error saying you have exceeded PGA_AGGREGATE_LIMIT. Of course, you can increase that parameter if necessary. Also be aware that when indexing in parallel, each parallel process will allocate up to the index memory setting.
What should it be set to? It's really a "safety" setting to prevent users grabbing too much machine memory when creating indexes. If you don't have ad-hoc users, then just set it as high as you need. In 10.1 it was limited to just under 500M, in 10.2 you can set it to any value.
The actual amount of memory used is not governed by this parameter, but by the MEMORY setting in the parameters clause of the CREATE INDEX statement. eg:
create index fooindex on foo(bar) indextype is ctxsys.context parameters ('memory 1G')
What's a good number to use for memory? Somewhere in the region of 100M to 200M is usually good.
3. No - that's out of date. To optimize your index use CTX_DDL.OPTIMIZE_INDEX. You can do that in FULL mode daily or weekly, and REBUILD mode perhaps once a month. -
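The advice in point 3 can be sketched as follows (the index name is hypothetical, and the schedule is only a suggestion):

```sql
-- FULL mode: merges fragmented rows and removes deleted-document garbage.
-- Suitable to run daily or weekly.
EXEC ctx_ddl.optimize_index('myindex', 'FULL');

-- REBUILD mode: rewrites the $I token table completely. Heavier; perhaps monthly.
EXEC ctx_ddl.optimize_index('myindex', 'REBUILD');
```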
What are the best memory management actions for the ipad mini retina.
I purchased an iPad mini retina 16 GB. I am concerned that, with all the capabilities of this device, 16 GB will not be sufficient. So I am asking for suggestions regarding effective memory management techniques.
Thanks.

Good common sense advice, which I will apply as much as possible. I don't plan on storing a lot of videos, but with iOS 7 the apps are so good that it will be difficult to limit the number used. Also, I will be using it for photo display, but I will move photos regularly to my PC for long-term storage.
-
Because of the discussion in the following thread, I did a few experiments and would like to share them with you. Hope this is useful information for you guys.
Re: db_name & memory allocation
OS: Win XP SP2 or Windows 2000 <b>(32-bit)</b>
Oracle Version: Oracle 10gR2
System RAM: 1G
<b>Attempting to create the database CCC by specifying db_name parameter only</b>
C:\oracle\product\10.2.0\db_1\database>copy con initCCC.ora
db_name=CCC
^Z
1 file(s) copied.
<b> Starting the service </b>
C:\oracle\product\10.2.0\db_1\database>oradim -new -sid CCC -startmode m
Instance created.
C:\oracle\product\10.2.0\db_1\database>set oracle_sid=CCC
C:\oracle\product\10.2.0\db_1\database>sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Apr 6 07:50:06 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
SQL>
SQL>
SQL> startup nomount
ORACLE instance started.
Total System Global Area <b>113246208</b> bytes -- Default SGA size
Fixed Size 1247588 bytes
Variable Size 58721948 bytes
Database Buffers 50331648 bytes
Redo Buffers 2945024 bytes
<b>On seeing the dynamic component sizes: the shared pool got 33m, the large pool 0m, the java pool 25m and the buffer cache 50m. Please check the references for these default values.</b>
SQL> select component,current_size from v$sga_dynamic_components;
COMPONENT CURRENT_SIZE
shared pool 33554432
large pool 0
java pool 25165824
streams pool 0
DEFAULT buffer cache 50331648
KEEP buffer cache 0
RECYCLE buffer cache 0
DEFAULT 2K buffer cache 0
DEFAULT 4K buffer cache 0
DEFAULT 8K buffer cache 0
DEFAULT 16K buffer cache 0
COMPONENT CURRENT_SIZE
DEFAULT 32K buffer cache 0
ASM Buffer Cache 0
13 rows selected.
<b>Total=109051904 </b>
SQL>
create database
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced
<b>Errors in alert log file</b>
ORA-00604: error occurred at recursive SQL level 1
<b>ORA-04031: unable to allocate 40 bytes of shared memory ("shared pool","create unique index
i_proxy_...","sql area","kksol : kksnsg")
</b>
Error 1519 happened during db open, shutting down database
USER: terminating instance due to error 1519
<b>It was thought that the shared_pool memory area was not sufficient. We then wanted to see how Oracle behaves when the sga_target parameter is set.</b>
<b>initCCC.ora</b>
db_name=CCC
sga_target=113m
SQL> startup nomount
ORACLE instance started.
Total System Global Area <b>121634816 bytes -- Got around 121m </b>
Fixed Size 1247636 bytes
Variable Size 54527596 bytes
Database Buffers 62914560 bytes
Redo Buffers 2945024 bytes
<b>Now the create database statement succeeded. The instance got a few extra MBs of memory and Oracle could create the database.</b>
<b>Then it was asked: why give even 121MB? Let exactly 113246208 bytes be devoted to the instance.</b>
initCCC.ora
db_name=CCC
sga_target=113246208 -- providing exact number of bytes as obtained in default case
SQL> startup nomount
ORACLE instance started.
Total System Global Area 113246208 bytes
Fixed Size 1247588 bytes
Variable Size 54527644 bytes
Database Buffers 54525952 bytes
Redo Buffers 2945024 bytes
SQL>
<b>Now the create database statement succeeded here as well. Because of sga_target, automatic shared memory management was enabled and the instance took care of the buffer cache, shared pool, large pool, java pool and streams pool. In this case the shared pool got 46m (greater than the default case value), the java pool and large pool got 4 MB each, and the buffer cache got 54m.</b>
SQL> select component,current_size
2 from v$sga_dynamic_components;
COMPONENT CURRENT_SIZE
shared pool 46137344
large pool 4194304
java pool 4194304
streams pool 0
DEFAULT buffer cache 54525952
KEEP buffer cache 0
RECYCLE buffer cache 0
DEFAULT 2K buffer cache 0
DEFAULT 4K buffer cache 0
DEFAULT 8K buffer cache 0
DEFAULT 16K buffer cache 0
COMPONENT CURRENT_SIZE
DEFAULT 32K buffer cache 0
ASM Buffer Cache 0
13 rows selected.
<b>Total=109051904 </b>
SQL>
By providing 113246208 bytes of memory to the SGA (the same amount as in the default case) via the sga_target value, Oracle gave extra memory to the shared pool and buffer cache compared with the default case, thus helping the create database statement to pass. Oracle always recommends using automatic shared memory management by setting the sga_target parameter.
Hope this experiment provides a few clues about the automatic shared memory management feature of Oracle 10g. This test was conducted on 32-bit Oracle. It is quite possible that the create database statement would succeed on a 64-bit platform, as there Oracle will by default provide 84 MB to the shared pool. But to confirm, it would have to be tested.
References:
Shared pool
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams196.htm#sthref804
Large Pool
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams090.htm#sthref377
Java Pool
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams087.htm#sthref364
Buffer Cache
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams043.htm#sthref185

Hi,
Good Work Mohammed !!!
Regards
Taj -
Questions about db_keep_cache_size and Automatic Shared Memory Management
Hello all,
I've come upon a server in which I need to pin a table and some objects, per the recommendations of an application support call.
Looking at the database, which is a 5 node RAC cluster (11gr2), I'm looking to see how things are laid out:
SQL> select name, value, value/1024/1024 value_MB from v$parameter
2 where name in ('db_cache_size','db_keep_cache_size','db_recycle_cache_size','shared_pool_size','sga_max_size');
NAME VALUE VALUE_MB
sga_max_size 1694498816 1616
shared_pool_size 0 0
db_cache_size 0 0
db_keep_cache_size 0 0
db_recycle_cache_size 0 0
Looking at granularity level:
SQL> select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%';
GRANULE_SIZE/VALUE
2048
Then I looked, and I thought this instance was set up with Automatic Shared Memory Management... but I see that the sga_target size is not set:
SQL> show parameter sga
NAME TYPE VALUE
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1616M
sga_target big integer 0
So, I'm wondering first of all: would it be a good idea to switch to Automatic Shared Memory Management? If so, is it as simple as altering the system to set sga_target = ...? Again, this is a RAC system; is there a different way to do this than on a single instance?
If that isn't the way to go, let me continue with the table size, etc.
The table I need to pin is:
SQL> select sum (blocks) from all_tables where table_name = 'MYTABLE' and owner = 'MYOWNER';
SUM(BLOCKS)
4858
And block size is:
SQL> show parameter block_size
NAME TYPE VALUE
db_block_size integer 8192
So, the space I'll need in memory for pinning this is:
4858 * 8192 /1024/1024 = 37.95.......which is well below my granularity mark of 2048
So, would this be as easy as setting db_keep_cache_size = 2048 with an alter system call? Do I need to set db_cache_size first? What do I set that to?
Thanks in advance for any suggestions and links to info on this.
cayenne
Edited by: cayenne on Mar 27, 2013 10:14 AM
Edited by: cayenne on Mar 27, 2013 10:15 AM

JohnWatson wrote:
"This is what you need: alter system set db_keep_cache_size=40M; I do not understand the arithmetic you do here: select granule_size/value from v$sga_dynamic_components, v$parameter where name = 'db_block_size' and component like 'KEEP%'; It shows you the number of buffers per granule, which I would not think has any meaning."

I'd been looking at some different sites studying this, and what I got from them was that this granularity gave you the minimum you could set db_keep_cache_size to; that if you tried setting it below this value, it would be bumped up to it; and also that each bump you gave the keep cache would be in increments of the granularity number...?
Thanks,
cayenne -
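For reference, the sizing arithmetic from the question expressed in SQL, together with the db_keep_cache_size=40M setting suggested in the replies (a sketch; the table and owner names are the ones used in the thread):

```sql
-- Space needed to cache the whole table: blocks * block size, in MB (~37.95 MB).
SELECT SUM(blocks) * 8192 / 1024 / 1024 AS needed_mb
FROM   all_tables
WHERE  table_name = 'MYTABLE' AND owner = 'MYOWNER';

-- Create a KEEP pool just above that size and assign the table to it.
ALTER SYSTEM SET db_keep_cache_size = 40M;
ALTER TABLE myowner.mytable STORAGE (BUFFER_POOL KEEP);
```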
Blue Screen of Death - Memory Management
I have gotten the blue screen of death twice now. The screen went by really fast, but I think it said something about memory, like memory management. I copied the problem details:
Problem signature:
Problem Event Name: BlueScreen
OS Version: 6.1.7601.2.1.0.768.3
Locale ID: 4105
Additional information about the problem:
BCCode: 1a
BCP1: 0000000000041790
BCP2: FFFFFA80022B8DF0
BCP3: 000000000000FFFF
BCP4: 0000000000000000
OS Version: 6_1_7601
Service Pack: 1_0
Product: 768_1
Files that help describe the problem:
C:\Windows\Minidump\071314-34335-01.dmp
C:\Users\Tiffany Jiang\AppData\Local\Temp\WER-53804-0.sysdata.xml
Both times it happened randomly, about 2 weeks apart, and the laptop (a Toshiba Satellite, bought brand new only about 3 weeks ago) still works fine after it restarts, but I don't want it to keep happening. I also don't know what the problem is. I used my previous Windows 7 laptop (HP Pavilion g6) the same way for 3 years and it never blue-screened even once. I tried going to the Toshiba support website to see if I need to download any new drivers, but the website does not seem to be working on my laptop; every time I go on it, it's just lines of letters and numbers.
Help would be much appreciated, thank you!

Satellite C55-A5195
Downloads here.
Bug Check 0x19: BAD_POOL_HEADER
Caused By Driver : vsdatant.sys <- True Vector device driver
Do you have ZoneAlarm on board? If so, uninstall it and see what happens.
==================================================
Dump File : 071314-34335-01.dmp
Crash Time : 7/15/2014 9:58:18 AM
Bug Check String : MEMORY_MANAGEMENT
Bug Check Code : 0x0000001a
Parameter 1 : 00000000`00041790
Parameter 2 : fffffa80`022b8df0
Parameter 3 : 00000000`0000ffff
Parameter 4 : 00000000`00000000
Caused By Driver : ntoskrnl.exe
Caused By Address : ntoskrnl.exe+75bc0
File Description :
Product Name :
Company :
File Version :
Processor : x64
Computer Name :
Full Path : C:\Test\071314-34335-01.dmp
Processors Count : 4
Major Version : 15
Minor Version : 7601
==================================================
==================================================
Dump File : 070514-42837-01.dmp
Crash Time : 7/15/2014 9:58:18 AM
Bug Check String : BAD_POOL_HEADER
Bug Check Code : 0x00000019
Parameter 1 : 00000000`00000020
Parameter 2 : fffffa80`0b390140
Parameter 3 : fffffa80`0b390160
Parameter 4 : 00000000`04020008
Caused By Driver : vsdatant.sys
Caused By Address : vsdatant.sys+47054
File Description :
Product Name :
Company :
File Version :
Processor : x64
Computer Name :
Full Path : C:\Test\070514-42837-01.dmp
Processors Count : 4
Major Version : 15
Minor Version : 7601
==================================================
-Jerry
Maybe you are looking for
-
Why don't responseHeader's works in JSF Portlet
Hi All, I have a JSF portlet where i want to save the contents of my Text area on the click of a button to a word document.(IBM Websphere portal) <h:inputTextarea id="text1" value="#{pc_TextAreaView.textAreaValue}" rows="10" cols="50" /> <h:commandBu
-
we want to create a helloworld project addon installation, but the problem is that file = thisExe.GetManifestResourceStream("Installer." & sAddonName & ".exe" If IO.File.Exists(sSourcePath) Then IO.File.Delete(
-
PO Auto creation through Requisition Workflow
Hi, I am creating a Non-Catalog Request type of requistion in iProcurement. When I submit the requisition for approval and if the requisition is approved the PO is not getting created. Some of the metalink notes said a Contract PO needs to be created
-
Clicking noise, then freezes
Hello everyone, I was wondering if anyone has experienced a strange clicking from the left side of the iMac. It reminds me of the sound that the lid of a "Snapple" bottle makes when you press the top of it in and out. It clicks for a few seconds and
-
Existance of two different domain in a single subnet
Hi, I have a query: is it possible to create two domains, say A.com and B.com, in a single subnet (in the same network)?
sujit mohanty