Memory_max_target and memory_target on 11.2
Hi gurus,
after an upgrade from Oracle Database 10g to 11.2, our DBA manually changed the memory parameters as follows:
*.db_cache_size=2113929216
*.hash_area_size=101310720
*.java_pool_size=83886080
*.log_buffer=5097152
*.memory_target=0
*.pga_aggregate_target=838860800
*.sga_max_size=12884901888
*.sga_target=0
*.shared_pool_reserved_size=61331648
*.shared_pool_size=1677721600
Is this the right way to configure parameters on 11g, or is it better to set only MEMORY_MAX_TARGET and MEMORY_TARGET and allow Oracle to manage the other memory sizes dynamically?
Thanks in advance.
Hi DDF,
It is simple to use SGA_TARGET with SGA_MAX_SIZE (you still need to size the PGA) or MEMORY_TARGET with MEMORY_MAX_TARGET (no need to define the PGA separately).
But make sure your applications run fine with that setup. Sometimes you may still need to set other memory parameters explicitly.
Cheers
M
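For reference, the two setups this reply contrasts can be sketched as parameter changes (the sizes below are illustrative examples, not recommendations for any particular system):

```sql
-- Option A: ASMM - Oracle tunes SGA components within SGA_TARGET;
-- the PGA is sized separately via PGA_AGGREGATE_TARGET.
ALTER SYSTEM SET SGA_MAX_SIZE = 12G SCOPE=SPFILE;
ALTER SYSTEM SET SGA_TARGET = 10G SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 2G SCOPE=SPFILE;

-- Option B: AMM - Oracle tunes SGA and PGA together within MEMORY_TARGET.
ALTER SYSTEM SET MEMORY_MAX_TARGET = 12G SCOPE=SPFILE;
ALTER SYSTEM SET MEMORY_TARGET = 12G SCOPE=SPFILE;
ALTER SYSTEM SET SGA_TARGET = 0 SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0 SCOPE=SPFILE;
```

Both MEMORY_MAX_TARGET and SGA_MAX_SIZE are static parameters, hence SCOPE=SPFILE and a restart.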
Similar Messages
-
Setting sga_max_size and memory_target
what happens when I set
sga_max_size = 4500M
memory_target = 5000M
memory_max_target = 5000M
sga_target = 0
can / will Oracle use more than 4.5GB for the SGA?
can the PGA grow beyond 500M?
startup;
ORACLE instance started.
Total System Global Area 4710043648 bytes
Fixed Size 2234376 bytes
Variable Size 3925870584 bytes
Database Buffers 771751936 bytes
Redo Buffers 10186752 bytes
user9198889 wrote:
g777 wrote:
hi
DB and OS version would be nice to see...
look here
http://www.dba-oracle.com/oracle11g/oracle_11g_memory_target_parameter.htm
SGA_MAX_SIZE & SGA_TARGET
linux 5.5
db 11.2.0.2 RAC, GI 11.2.0.2
thanks for the links,
"When using AMM (by setting memory_target and/or sga_target), the values for the "traditional" pool parameters (db_cache_size, shared_pool_size, etc.) are not ignored. Rather, they specify the minimum size that Oracle will always maintain for each sub-area in the SGA."
so that means the PGA won't grow to more than 500MB and the SGA will be set to a minimum of 4.5GB?
actually, if you read the above comment carefully, it doesn't mention sga_max_size; it only talks about traditional pools such as db_cache_size etc...
does this also include sga_max_size?
I had a look at the other link to a thread, but the question is slightly different: they are discussing the relation of sga_target to memory_target, while I am interested in sga_max_size in relation to memory_target -
Sga_max_size and sga_target values
I have an 11g database on Windows with 4GB RAM. I have set MEMORY_MAX_TARGET and MEMORY_TARGET; what should I do with the pre-existing sga_max_size and sga_target values?
memory_target = sga_target + max(pga_aggregate_target, maximum PGA allocated)
MEMORY_MAX_TARGET = sum of the SGA and instance PGA sizes.
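Putting hypothetical numbers into the two formulas above (a sketch of the arithmetic, not official sizing advice; all values are examples):

```python
# Illustrative sizing based on the formulas quoted above, in MB.
sga_target = 4 * 1024          # example SGA_TARGET
pga_aggregate_target = 800     # example PGA_AGGREGATE_TARGET
max_pga_allocated = 1200       # e.g. "maximum PGA allocated" from v$pgastat

# memory_target = sga_target + max(pga_aggregate_target, maximum PGA allocated)
memory_target = sga_target + max(pga_aggregate_target, max_pga_allocated)
print(memory_target)  # 5296
```

Because the observed PGA high-water mark (1200MB) exceeds PGA_AGGREGATE_TARGET (800MB), it is the larger of the two that feeds the formula.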
For Automatic memory management
set
ALTER SYSTEM SET SGA_TARGET = 0;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 0;
Note:
In a text initialization parameter file, if you omit the line for MEMORY_MAX_TARGET and include a value for MEMORY_TARGET, the database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If you omit the line for MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the MEMORY_TARGET parameter defaults to zero. After startup, you can then dynamically change MEMORY_TARGET to a nonzero value, provided that it does not exceed the value of MEMORY_MAX_TARGET.
Note:
The preceding steps instruct you to set SGA_TARGET and PGA_AGGREGATE_TARGET to zero so that the sizes of the SGA and instance PGA are tuned up and down as required, without restrictions. You can omit the statements that set these parameter values to zero and leave either or both of the values as positive numbers. In this case, the values act as minimum values for the sizes of the SGA or instance PGA.
Reference -
Is there a limit to how high memory_max_target can be set?
We are running Oracle 11g R2 64-bit on a server with Windows Server 2008 R2 Enterprise 64-bit. The machine has 64GB of RAM and there are four databases setup.
When trying to allocate the RAM for the four databases, we set the memory_max_target and memory_target as follows:
DB1 = 8GB
DB2 = 32GB
DB3 = 4GB
DB4 = 8GB
This leaves 12GB for the OS.
This all worked fine and is set correctly in the databases, but when looking at the task manager, the oracle.exe processes show the memory being used as we set it for three of the four. The one database set to 32GB is only using 16GB for some reason. Do any of you know why this is? Is there a maximum limit that can be allocated to the database?
Edited by: 1009215 on May 31, 2013 7:49 PM
Thanks Justin. So do you think it would eventually use all 32GB if it was under enough load? It just seems strange, because before, we went with the defaults during install and it tried allocating 40% of the RAM to all four instances (which obviously didn't work). When we started shutting down databases, we would see the ones still running take up much more RAM, even though they were empty and not doing anything. At the time they would each try to grab 26GB if they could, since that was 40% of 64GB, which would leave basically nothing for the OS.
After making the changes to target and max target, everything looked correct in the task manager except for the one that is sitting at 16GB instead of 32GB. -
Hello, my questions concern Oracle 11g.
First topic
Tell me if I'm right:
if i set server parameters:
memory_max_target to 1.5 GB
memory_target to 1GB
pga_aggregate_target to 100MB
sga_target to 500MB
Q1. Does that mean the DB will use at least 500MB for all SGA components and at least 100MB for server processes?
Q2. And if that's true, will Oracle tune the SGA size above sga_target, and the PGA above 100MB, if necessary?
Second topic:
If I set :
memory_max_target to 1.5 GB
memory_target to 1GB
pga_aggregate_target to 0
sga_target to 0
Q3. Does that mean the DB will tune the SGA components' sizes and the PGA itself, up to 1GB for both elements combined?
Thanks for the reply.
From MOS ID 169706.1
Automatic Memory Management
Starting with Oracle Database 11g, the Automatic Memory Management feature requires more shared memory (/dev/shm) and file descriptors. The shared memory should be sized to be at least the greater of MEMORY_MAX_TARGET and MEMORY_TARGET for each Oracle instance on the computer. To determine the amount of shared memory available, enter the following command: # df -k /dev/shm/
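The /dev/shm check above can be scripted. The df output line below is hypothetical (a 36GB tmpfs); on a real host substitute the actual output of `df -k /dev/shm`, and set the target to your instance's MEMORY_MAX_TARGET:

```shell
# Hypothetical df -k output line for a host with a ~36GB tmpfs;
# on a real system capture it with: df -k /dev/shm | tail -1
df_line="tmpfs 37748736 0 37748736 0% /dev/shm"

shm_kb=$(echo "$df_line" | awk '{print $2}')
memory_max_target_kb=$((32 * 1024 * 1024))   # example: 32GB MEMORY_MAX_TARGET

if [ "$shm_kb" -ge "$memory_max_target_kb" ]; then
  echo "shm-ok: ${shm_kb} KB available"
else
  echo "shm-too-small: ${shm_kb} KB < ${memory_max_target_kb} KB"
fi
```

With multiple instances on one box, the comparison should be against the sum of their MEMORY_MAX_TARGET values, per the note.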
Note: MEMORY_MAX_TARGET and MEMORY_TARGET cannot be used when LOCK_SGA is enabled or with huge pages on Linux -
Oracle 11g AMM (Automatic Memory Management)
Hi All,
I have a very powerful server, 24 processors with 6 cores each and 74 GB RAM, for my production database. The server will host only one production database. I want to use AMM for this database and allocate maximum memory to Oracle by setting memory_target. By default /dev/shm is set to 37 GB, but I want to increase it to at least 55 GB. I know my system admin can change this, but I wanted to know how much memory I should leave for the OS?
Please help me on sizing this.
Thanks,
Arun Singh
From MOS ID 169706.1
Automatic Memory Management
Starting with Oracle Database 11g, the Automatic Memory Management feature requires more shared memory (/dev/shm) and file descriptors. The shared memory should be sized to be at least the greater of MEMORY_MAX_TARGET and MEMORY_TARGET for each Oracle instance on the computer. To determine the amount of shared memory available, enter the following command: # df -k /dev/shm/
Note: MEMORY_MAX_TARGET and MEMORY_TARGET cannot be used when LOCK_SGA is enabled or with huge pages on Linux -
Log file sync during RMAN archive backup
Hi,
I have a small question. I hope someone can answer it.
Our database (cluster) needs to respond within 0.5 seconds. Most of the time it does, except when the RMAN backup is running.
During the week we run one full backup, one incremental backup every weekday, a controlfile backup every hour, and an archivelog backup every 15 minutes.
During a backup, response time can be much longer than these 0.5 seconds.
Below is a typical example of response time:
EVENT: log file sync
WAIT_CLASS: Commit
TIME_WAITED: 10,774
It is obvious that it takes very long to get a commit. This value is in seconds, so as you can see it is long. It is clearly related to the RMAN backup, since this kind of response time shows up when the backup is running.
I would like to ask why response times are so high, even when I only back up the archivelog files? We didn't have this problem before, but it suddenly appeared 2 weeks ago and I can't find the cause.
- We use a 11.2G RAC database on ASM. Redo logs and database files are on the same disks.
- Autobackup of controlfile is off.
- Dataguard: LogXptMode = 'arch'
Greetings,
Hi,
Thank you. I am new here and so I was wondering how I can put things into the right category. It is very obvious I am in the wrong one so I thank the people who are still responding.
- Actually the example that I gave is one of many hundreds a day. The response times during the archive backup are mostly between 2 and 11 seconds. When we back up the controlfile along with it, these response times are guaranteed.
- Autobackup of the controlfile is turned off since we already have a backup of the controlfile every hour. As we back up archivelogs every 15 minutes, it is not necessary to also back up the controlfile every 15 minutes, especially if that causes even more delay. The controlfile is a lifeline, but if you have properly backed up your archivelogs, a full restore with at most 15 minutes of data loss is still possible. We turned autobackup off since it severely hurts performance at the moment.
As already mentioned, for specific applications the DB has to respond within 0.5 seconds. When it doesn't, an entry is written to a table used by that application, so I can compare the time of failure with the time of other events. The times of the archivelog backup and the failures match in 95% of the cases. It also shows that log file sync is part of this performance issue at those moments. I built a script that I use to determine, from the application's point of view, what is causing the problem:
select ASH.INST_ID INST,
ASH.EVENT EVENT,
ASH.P2TEXT,
ASH.WAIT_CLASS,
DE.OWNER OWNER,
DE.OBJECT_NAME OBJECT_NAME,
DE.OBJECT_TYPE OBJECT_TYPE,
ASH.TIJD,
ASH.TIME_WAITED TIME_WAITED
from (SELECT INST_ID,
EVENT,
CURRENT_OBJ#,
ROUND(TIME_WAITED / 1000000,3) TIME_WAITED,
TO_CHAR(SAMPLE_TIME, 'DD-MON-YYYY HH24:MI:SS') TIJD,
WAIT_CLASS,
P2TEXT
FROM gv$active_session_history
WHERE PROGRAM IN ('yyyyy', 'xxxxx')) ASH,
(SELECT OWNER, OBJECT_NAME, OBJECT_TYPE, OBJECT_ID FROM DBA_OBJECTS) DE
WHERE DE.OBJECT_id = ASH.CURRENT_OBJ#
AND ASH.TIME_WAITED > 2
ORDER BY 8,6
- Our logfiles are 250M and we have 8 groups of 2 members.
- The large pool is not set, since we use memory_max_target and memory_target. I know Oracle may not use memory optimally with this parameter, so it is truly a thing I should look into.
- I looked at the size of the log buffer. Ours is 28M, which in my opinion is very large, so maybe I should make it even smaller. It is quite possible that the log buffer is causing this problem. Thank you for the tip.
- I will also definitely look into the I/O. Even though we use ASM on RAID 10, I don't think it is wise to put redo logs and datafiles on the same disks. Then again, it was not installed by me. So, you are right, I have to investigate.
Thank you all very much for still responding even if I put this in the totally wrong category.
Greetings, -
Install Oracle Clusterware/RAC Windows x32bit or Windows 64bit ?
Hi,
What is your suggestion to install Oracle Clusterware RAC on Windows.
Why choose Windows 32 bit ?
Why choose Windows 64 bit?
What are the advantages and disadvantages?
Hi,
I definitely recommend 64-bit Windows; the principal reason is the memory addressing limitation.
From my standpoint there are no benefits to using Windows 32-bit with Oracle Clusterware/RAC.
32-bit processors have address registers that are 32-bits wide (can address up to 4GB physical RAM).
You can use a workaround to address more than 4GB by enabling parameters in Windows (AWE/PAE), but if I wanted to use more than 4GB, I would go to Windows 64-bit.
The 4GB addressable space per process is divided into system and user space. By default, the system space is 2GB and the private user space is 2GB. Each user process shares the same 2GB System address space.
The Clusterware, ASM, and listener services will consume at least 1GB of memory initially.
If you create an instance that uses 1GB and you have 50 connections, you will have problems connecting to the database, raising the following errors:
TNS-00510: Internal limit restriction exceeded
32-bit Windows Error: 8: Exec format error
TNS-00507: Connection closed
32-bit Windows Error: 109: Unknown error
Reason: problems with memory usage, i.e. out of memory.
Another point:
With Windows 32-bit and Clusterware/RAC 11.1 you can set the Automatic Memory Management (AMM) feature (MEMORY_TARGET), but you cannot use it if you create two instances on the same host.
Check this note..
*ORA-27102, OSD-00031 Unable To Extend Memory_max_target And Memory_target Past 2GB [ID 842881.1]*
Oracle's solution: install the 64-bit OS on a 64-bit server, so there are no memory limitations.
I believe that is the reason Oracle does not support Oracle Grid Infrastructure 11.2 on Windows 32-bit. You cannot use ASM 11.2 on Windows 32-bit.
There is definitely no reason, for me, to use Windows 32-bit to support Oracle Clusterware.
There are a lot of services involved which require memory, and memory is severely limited on 32-bit.
At present all recent CPUs support 64-bit, so install Oracle Clusterware/RAC x64 even when you use less than 4GB of memory; you only gain benefits.
Benefits of using 64-bit:
- 64-bit versions of Windows implement up to 16 terabytes of virtual address space
- 64-bit programs use the 16-terabyte tuning model (8TB user / 8TB kernel)
- No performance penalties for large memory
- Unlimited user connectivity
- 64-bit Windows versions allow addressing large physical memory sizes without using PAE
*Windows Memory Configuration: 32-bit and 64-bit [ID 873752.1]*
This document gives a better understanding of my perspective.
http://levipereira.wordpress.com/2011/02/12/comparison-of-32-bit-and-64-bit-oracle-database-performance-on-the-dell-poweredge-6850-server-with-microsoft-windows-server-2003/
Regards,
Levi Pereira -
MCU Send assignment is not reflected in automatic channel management
I have discovered a problem where the MCU Send assignment is not reflected in automatic channel management.
Can someone try this for me please, with a session that has for 'Automatic Management of Channel strips' enabled...
Press and hold 'Send' -- then push v-pot 7 for the function 'Send, destination and level' for the v-pots.
If you assign an Aux that has not been used, it will not be created automatically.
Cheers,
Matt
Automatic Shared Memory Management Enabled
Total SGA Size (MB) 3840
SGA Component Current Allocation (MB)
Check the value of your MEMORY_MAX_TARGET and MEMORY_TARGET settings; in 11g these two parameters enable automatic memory management.
The values of SGA_TARGET and PGA_AGGREGATE_TARGET act as minimum values for the sizes of the SGA and instance PGA if set to non-zero.
SHOW PARAMETER TARGET
http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/memory003.htm#BGBJAHEJ -
ASM support on Win2008-32bit OS
Hi,
Can anyone let me know whether ASM is supported on Win2008 32-bit systems? I am getting the following error while trying to install an ASM database on a Win2008 32-bit OS.
>
Oracle Database 11g Release 2 and later supports Oracle ASM only on 64-bit Windows operating systems
>
I checked the certification page and was not able to find details regarding ASM support.
Amith
Hi Amith,
32-bit processors have address registers that are 32-bits wide (can address up to 4GB physical RAM).
The 4GB addressable space per process is divided into system and user space. By default, the system space is 2GB and the private user space is 2GB. Each user process shares the same 2GB System address space.
I do not recommend using AWE/PAE to address more than 4GB of memory, since Oracle and Windows 64-bit already exist.
Oracle Restart must be configured to use ASM.
Memory Requirements for Grid Infrastructure Standalone
Oracle Grid Infrastructure requires at least 1 gigabyte (GB) of RAM for installation.
http://download.oracle.com/docs/cd/E11882_01/install.112/e16773/oraclerestart.htm#CHDCEBFD
Memory Requirements for Database
Physical memory (RAM) 1 GB minimum
http://download.oracle.com/docs/cd/E11882_01/install.112/e16773/reqs.htm#BCFDACBF
Configuring ASM with two diskgroups will use at least 500MB.
Creating a database with 1GB of memory (using memory_target), you will already have memory problems.
With this setup, a database with 50 connections will already run out of memory.
Remember the private user space is 2GB on Windows.
To avoid these types of problems, which were already happening in version 11.1 on Windows 32-bit, Oracle no longer supports ASM on Windows 32-bit: the number of services increased and memory is severely limited on Windows 32-bit.
This issue is very common on Windows 32bit. I have seen many database admins suffering with this problem on Windows 32bit.
*ORA-27102, OSD-00031 Unable To Extend Memory_max_target And Memory_target Past 2GB [ID 842881.1]*
To use all the features of Oracle 11.2 with Oracle Grid Infrastructure you will need more than 2GB of RAM available in the user address space.
A Linux 32-bit kernel works in another way, not limiting the user address space to 2GB; for that reason Linux 32-bit still supports Oracle Database + Grid standalone.
I'm not suggesting you use Linux, just showing the reason Windows is not supported.
Regards,
Levi Pereira
Edited by: Levi Pereira on Mar 1, 2011 2:57 PM -
Iphone5 automatic memory increase day by day
My name is Jeewn Garg. I am using an iPhone 5, white, 32GB (IMEI: ******), running iOS 8.1. My iPhone's memory usage increases day by day automatically. I have talked to Apple customer care three times but they have not solved the problem.
<Personal Information Edited by Host>
From MOS ID 169706.1
Automatic Memory Management
Starting with Oracle Database 11g, the Automatic Memory Management feature requires more shared memory (/dev/shm) and file descriptors. The shared memory should be sized to be at least the greater of MEMORY_MAX_TARGET and MEMORY_TARGET for each Oracle instance on the computer. To determine the amount of shared memory available, enter the following command: # df -k /dev/shm/
Note: MEMORY_MAX_TARGET and MEMORY_TARGET cannot be used when LOCK_SGA is enabled or with huge pages on Linux -
Suggestion on increasing the SGA and PGA after an increase in the machine's RAM.
Hi All,
Need expert suggestions on changes to the SGA and PGA of the Oracle databases after increasing the machine's RAM.
We have 64GB of RAM in the machine.
There are 9 DBs running on Oracle 10g and 11g on this machine.
The total SGA of all the databases is around 18GB.
DB1 has 2G of SGA_MAX_SIZE
DB2 has 8G of SGA_MAX_SIZE
DB3 has 1G of SGA_MAX_SIZE
DB4 has 2G of SGA_MAX_SIZE
DB5 has 676M of SGA_MAX_SIZE
DB6 has 1.5G of SGA_MAX_SIZE
DB7 has 1.2G of SGA_MAX_SIZE
DB8 has 675M of SGA_MAX_SIZE
DB9 has 672M of SGA_MAX_SIZE
Now the machine's RAM is up to 96GB (64 + 32).
What would be the suggestion on increasing the SGA and PGA max sizes?
Any expert suggestion is highly appreciated.
Thanks in advance
Thanks for this Justin.
Here the exercise was to add 4 CPUs and 32GB of RAM to the server; earlier it had 4 CPUs and 64GB of RAM. This was decided by the server support team.
Here are the stats from the SAR command:
The platform is
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
04:00:01 CPU %user %nice %system %iowait %steal %idle
04:10:02 all 86.47 0.00 12.52 0.99 0.00 0.02
04:20:01 all 84.83 0.01 11.80 3.21 0.00 0.15
04:30:01 all 76.23 0.00 10.30 12.34 0.00 1.12
04:40:01 all 79.14 0.00 12.07 8.30 0.00 0.49
04:50:01 all 77.63 0.00 12.19 9.40 0.00 0.77
05:01:01 all 75.95 0.00 10.50 12.80 0.00 0.75
05:11:01 all 83.21 0.00 11.98 4.54 0.00 0.26
05:21:01 all 76.37 0.01 11.20 11.49 0.00 0.94
05:31:01 all 77.97 0.00 9.04 10.30 0.00 2.69
Average: all 79.72 0.00 11.28 8.20 0.00 0.80
Now that the 4 CPUs and RAM have been added, my request is how to calculate the required kernel parameters for Oracle for this new configuration, for some improvement in Oracle performance:
like kernel.shmmax
kernel.shmall
/dev/shm file system
largest value of MEMORY_MAX_TARGET or MEMORY_TARGET of all instances
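One common rule of thumb for the two kernel parameters above (my assumption, not from this thread — always check the install guide for your exact platform): kernel.shmmax should exceed the largest SGA (a frequent convention is half of physical RAM), and kernel.shmall is total shared memory expressed in pages, often set to RAM / page size. For the 96GB box that would be:

```python
# Rule-of-thumb kernel parameter sizing for a 96GB Linux host.
# The formulas are common guidance, not values taken from this thread.
ram_bytes = 96 * 1024**3
page_size = 4096                      # typical x86_64 page size; check getconf PAGE_SIZE

shmmax = ram_bytes // 2               # bytes: half of RAM, must exceed the largest SGA
shmall = ram_bytes // page_size       # pages: total addressable shared memory

print(shmmax)  # 51539607552
print(shmall)  # 25165824
```

/dev/shm then needs to be at least the sum of the MEMORY_MAX_TARGET values of all AMM instances on the host.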
Oracle is set to automatic memory management. It is not a RAC environment.
Any expert suggestion is highly appreciated.
Thanks in advance . -
Hi Experts,
I loaded HR and it went well, but now I am trying to load both HR and Financials (full load). I am getting the below issue for some DAC tasks:
ORA-04030: out of process memory when trying to allocate 1049100 bytes (kxs-heap-w,kllcqgf:kllsltba)
Environment:
Source System:
Win server 2008 - 2 GB RAM; EBS R12.1.1.
Target System:
Windows Server 2008 32-bit (with /3GB and /PAE switches enabled) – 4 GB RAM; OBIEE 10g; BI Apps 7.9.6.2; DAC 10.1.3.4.1; Informatica PC 8.6.1 Hotfix 11; Oracle DB 11.1.0.7.
DAC Tasks that failed:
TASK_GROUP_Extract_GLLinkageInformation
SDE_ORA_GL_COGS_LinkageInformation_Extract
SDE_ORA_GL_AP_LinkageInformation_Extract
SDE_ORA_GL_AR_REV_LinkageInformation_Extract
SDE_ORA_GL_PO_LinkageInformation_Extract
All the above tasks are failing with above error.
Below are the memory parameters for BI Apps database:
SQL> show parameter target
NAME TYPE VALUE
archive_lag_target integer 0
db_flashback_retention_target integer 1440
fast_start_io_target integer 0
fast_start_mttr_target integer 0
memory_max_target big integer 820M
memory_target big integer 820M
pga_aggregate_target big integer 257374182
sga_target big integer 0
I also tested by increasing below parameters:
memory_max_target 2G
memory_target 2G
pga_aggregate_target 1G
sga_target 900M
But it didn't work; same errors.
Please let me know how I can solve this issue. Thanks for your time.
The text below is from the doc ID; I've picked out the solution part.
Let me know your updates.
=====Start======================
Solution
NON-ORACLE SOFTWARE STEPS
1. If you have 4GB or less of RAM, add more RAM to the 32-bit computer system (add another 4GB or more, if possible).
2. Enable the /3GB switch in the BOOT.INI, see note:
Note 225349.1 Implementing Address Windowing Extensions (AWE) or VLM on Windows Platforms
3. If using MS-Windows Enterprise Edition, enable Physical Address Extensions (PAE) by adding the /PAE switch to the BOOT.INI, see Note 225349.1.
NOTE: The Windows tool Perfmon should be used in ORA-4030 problems on Windows. Task Manager is not a reliable tool to investigate ORACLE.EXE process memory size.
ORACLE SOFTWARE STEPS
Steps for both Enterprise and non-Enterprise Editions of MS-Windows
1.Check for excessive INACTIVE sessions:
select sum(PGA_ALLOC_MEM) from v$process p, v$session s where p.addr = s.paddr and s.status = 'INACTIVE';
If this query returns a large value (i.e. several hundred megabytes or even greater than 1 gigabyte), then it is recommended that you automate the cleanup of INACTIVE sessions. To see how this works, see these notes:
Note 151972.1 Dead Connection Detection (DCD) Explained
Note 159978.1 How To Automate Disconnection of Idle Sessions
Implement DCD & IDLE_TIME, by doing the following;
Set SQLNET.EXPIRE_TIME = x (minutes) in the Server SQLNET.ORA file,
Create a PROFILE with IDLE_TIME set, and assign it to users.
If you find that the processes remain with status SNIPED, then you will need to implement removal of those processes as well, see note:
Note 96170.1 Script for killing sniped sessions shadow processes
For more information, see Note 601605.1 A discussion of Dead Connection Detection, Resource Limits, V$SESSION, V$PROCESS and OS processes
2.Review Note 46001.1 and determine the pro's and con's of running ORASTACK against the ORACLE.EXE.
If appropriate, shut down the database, and run the following command in an MS-DOS window:
orastack oracle.exe 500000
Re-start the database.
Steps for Enterprise Editions of MS-Windows
Determine if using AWE would fit your database needs. This allows the Buffer Cache component in the SGA to be relocated above the 4Gb memory footprint for the ORACLE.EXE process. Since this configuration requires a virtual memory window to map memory allocations above the 4Gb memory area, this option fits best with database requirements for a 1G and up sized Buffer Cache. It would not be efficient to have a 400M Buffer Cache above the 4Gb memory footprint and yet allocate a 1Gb virtual memory window to map to that memory.
1.Decide on the size of the SGA, PGA requirements and AWE (default 1Gb), given 300Mb overhead for the ORACLE.EXE and the 3Gb memory limit (as per the BOOT.INI /3GB switch). Please note that the minimum AWE size depends on the number of CPU's, see note Note 225349.1.
- Grant OracleService<SID> the 'Lock Pages in Memory' system privilege at the operating system level, see Note 225349.1.
- If necessary, change the Address Windowing Extensions (AWE) size from the default 1Gb, see Note 225349.1.
- Adjust any of the other SGA memory settings; SHARED_POOL_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE & STREAMS_POOL_SIZE.
- Adjust the PGA memory setting, PGA_AGGREGATE_TARGET. NOTE: This is a target, so a decrease in this process will not directly affect the memory footprint of the ORACLE.EXE.
- Unset SGA_TARGET and/or MEMORY_TARGET (11g).
- Set USE_INDIRECT_DATA_BUFFERS=TRUE.
- Unset DB_CACHE_SIZE. Set DB_BLOCK_BUFFERS to the desired size (this will use memory above the 4Gb range).
2.Start the database.
Steps for NON-Enterprise Editions of MS-Windows
1.Decide on the size of the SGA and PGA, given 0.1Gb overhead for the ORACLE.EXE and the 3Gb memory limit (as per the BOOT.INI /3GB switch).
2.Adjust the SGA_TARGET and/or MEMORY_TARGET (11g), or use explicit settings for the SGA components and eliminate auto-tuning. NOTE: Advantages of auto-tuning are often minimal on Windows 32-bit due to memory limit issues.
3.Adjust the PGA memory setting, PGA_AGGREGATE_TARGET (optional on 11g). NOTE: This is a target, so a decrease in this process will not directly affect the memory footprint of the ORACLE.EXE.
4.Start the database. -
Oracle performance, slow for larger and more complex results.
Hello Oracle forum,
At the moment I have an Oracle database running, and I'm specifically interested in the efficiency of the Spatial extension for webmaps and GIS.
I've been testing the database with large shapefiles (400MB - 1GB), loading them into the database with shp2sdo -> SQL*Loader.
Using Benchmark Factory I've tested the speed of transactions, and it drops relatively quickly. I started with a simple query:
SELECT id FROM map WHERE id = 3 -- when I increase the number of ids to 3-10000, the performance decreases drastically.
so :
SELECT id FROM map WHERE id >=3 and id <= 10000
The explain plan below is for the second query; both queries use the index.
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 9828 | 49140 | 22 (0)| 00:00:01 |
|* 1 | INDEX RANGE SCAN| SYS_C009650 | 9828 | 49140 | 22 (0)| 00:00:01 |
Statistics
0 recursive calls
0 db block gets
675 consistent gets
0 physical reads
0 redo size
134248 bytes sent via SQL*Net to client
7599 bytes received via SQL*Net from client
655 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
9796 rows processed
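One detail worth noting in those statistics (my observation, not raised in the thread): 9,796 rows over 655 SQL*Net roundtrips is roughly 15 rows per fetch, which is the SQL*Plus default arraysize, so a chunk of the elapsed time is network roundtrips rather than database work. A sketch of the arithmetic:

```python
import math

rows = 9796
arraysize = 15                 # SQL*Plus default fetch size
roundtrips = math.ceil(rows / arraysize)
print(roundtrips)              # 654 fetch roundtrips, close to the 655 reported

# A larger fetch size (e.g. SET ARRAYSIZE 500 in SQL*Plus) cuts roundtrips:
print(math.ceil(rows / 500))   # 20
```

The same applies in Benchmark Factory or any client: raising the fetch/array size is often the cheapest fix for "slow" large result sets.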
The statistics do not show anything very weird, but maybe I'm wrong. Nothing changed in the explain plan except for the range scan instead of a unique scan.
The query returns lots of results, and this is, I think, the reason why the measured time of the query is large. The time it takes to return rows increases quickly with the number of rows.
Can this be solved? The table was analyzed before starting the query.
The parameters of the database are not really changed from the defaults; I increased the amount of memory used by Oracle 11g to 1+ gigabyte
and let the database itself decide how it uses this memory.
The system specs and db parameters are:
Oracle 11G
Memory Processor # of CPUs OS OS Version OS B
1.99 gb Intel(R) Core(TM)2 CPU 6600 @ 2.40GHz 2 Microsoft WindowsXP 5.2600
0=Oracle decides which value will be given
cursor_sharing EXACT
cursor_space_for_time FALSE
db_block_size 8192
db_recovery_file_dest_size 2147483648
diagnostic_dest C:\DBBENCHMARK\ORACLE
dispatchers (PROTOCOL=TCP) (SERVICE=gistestXDB)
hash_area_size 131072
log_buffer 5656576
memory_max_target 1115684864
memory_target 1048576000
open_cursors 300
parallel_max_servers 20
pga_aggregate_target 0
processes 150
resumable_timeout 2162688
sort_area_size 65536
Sga=632mb
PGA=368mb
javapool=16mb
largepool=8mb
other=8mb
So I indexed and analyzed the data; what did I forget? I can speed it up with soft parsing, but the problem remains. Hopefully this is enough information for some analysis; has anyone experienced the same problems? I tested the speed with SQL Developer and it shows the same speed as Benchmark Factory. What could be wrong with the parameters?
Thanks,
Jan Martijn
Edited by: user12227964 on 25-jan-2010 4:53
Edited by: user12227964 on 26-jan-2010 2:20
Sand wrote:
select count(id) resulted in 3,669,015 counted ids.
The database counted 18,345,075 rows per second without bind variables, which is ten times slower than your result. This may be possible because of hardware, but my question is specifically about the number of rows returned, i.e. large result sets.
The idea was not to compare the speed of "select count(*)" statements - but to illustrate that even when dealing with a huge number of rows, one can decrease the amount of I/O that needs to be performed to deal with that number of rows.
Select id from map where id <= 1
4000 rows per second are selected. Rows/sec is a meaningless measurement - due to physical I/O (PIO) versus logical I/O (LIO). You can select 100 rows that require PIO, resulting in an elapsed time of 1 sec. You can select 1000 rows that require only LIO, with an elapsed time of 0.5 sec.
Is the 2nd method better or faster? No. It simply needed less time to be spent on I/O, as the data blocks were in the buffer cache (memory) and did not require very slow and expensive disk access.
Another database I tested returns 6 times 25,425 rows per second for the same query (100 ids). What could be a parameter that limits the output speed of multiple rows in a query?
Every single row that needs to be read/processed by a SQL statement has a cost associated with it. This cost is not consistent! It differs depending on how that row can be reached - what I/O paths are available to find that row? Does the full table need to be scanned? Does an index need to be scanned? Is there a unique index that can be used? Is the table partitioned, and can partition pruning be applied and local partition indexes used? Are there user functions that need to be applied to the row's data? Etc. Etc.
All these together determine how fast the client gets a row from the cursor executing that SQL.
The more rows you want to process, the bigger the increase in the cost/expense - specifically more I/O. As I/O is the biggest expense (slowest ito elapsed time).
So you want to do as little I/O as possible and read as little data as possible. For example, instead of a full table scan, a fast full index scan. For example, instead of reading the complete contents of a 10GB table, reading the complete contents of a 12MB index for that table.
I suggest that you read the Oracle Performance Guide to familiarise yourself with basic performance concepts. Use http://tahiti.oracle.com to find the guide for your applicable Oracle version. -
"latch: row cache objects" and high "VERSION_COUNT"
Hello,
we are being faced with a situation where the database spends most of its time waiting for latches in the shared pool (as seen in the AWR report).
All statements issued by the application are using bind variables, but what we can see in V$SQL is that even though the statements are using bind variables, some of them have a relatively high version_count (> 300) and many invalidations (100 - 200), even though the tables involved are very small (some not more than 3 or 4 rows).
Here is some (hopefully enough) information about the environment
Version: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production (on RedHat EL 5)
Parameters:
cursor_bind_capture_destination memory+disk
cursor_sharing EXACT
cursor_space_for_time FALSE
filesystemio_options none
hi_shared_memory_address 0
memory_max_target 12288M
memory_target 12288M
object_cache_optimal_size 102400
open_cursors 300
optimizer_capture_sql_plan_baselines FALSE
optimizer_dynamic_sampling 2
optimizer_features_enable 11.2.0.2
optimizer_index_caching 0
optimizer_index_cost_adj 100
optimizer_mode ALL_ROWS
optimizer_secure_view_merging TRUE
optimizer_use_invisible_indexes FALSE
optimizer_use_pending_statistics FALSE
optimizer_use_sql_plan_baselines TRUE
plsql_optimize_level 2
session_cached_cursors 50
shared_memory_address 0
The shared pool size (according to AWR) is 4,832M
The buffer cache is 3,008M
Now, my question: is a version_count of > 300 a problem (we have about 10-15 of those out of a total of ~7000 statements in v$sqlarea)? Those are also the statements listed at the top of the AWR report in the sections "SQL ordered by Version Count" and "SQL ordered by Sharable Memory".
Is it possible that those statements are causing the latch contention in the shared pool?
I went through https://blogs.oracle.com/optimizer/entry/why_are_there_more_cursors_in_11g_for_my_query_containing_bind_variables_1
The tables involved are fairly small and all the execution plans for each cursor are identical.
I can understand some of the invalidations that happen, because we have 7 schemas that have identical tables, but from my understanding that shouldn't cause such a high invalidation number. Or am I mistaken?
I'm not that experienced with Oracle tuning at that level, so I would appreciate any pointers on how to find out where exactly the latch problem occurs.
After flushing the shared pool, the problem seems to go away for a while. But apparently that is only fighting symptoms, not fixing the root cause of the problem.
Some of the statements in question:
SELECT * FROM QRTZ_SIMPLE_TRIGGERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
UPDATE QRTZ_TRIGGERS SET TRIGGER_STATE = :1 WHERE TRIGGER_NAME = :2 AND TRIGGER_GROUP = :3 AND TRIGGER_STATE = :4
UPDATE QRTZ_TRIGGERS SET TRIGGER_STATE = :1 WHERE JOB_NAME = :2 AND JOB_GROUP = :3 AND TRIGGER_STATE = :4
SELECT TRIGGER_STATE FROM QRTZ_TRIGGERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
UPDATE QRTZ_SIMPLE_TRIGGERS SET REPEAT_COUNT = :1, REPEAT_INTERVAL = :2, TIMES_TRIGGERED = :3 WHERE TRIGGER_NAME = :4 AND TRIGGER_GROUP = :5
DELETE FROM QRTZ_TRIGGER_LISTENERS WHERE TRIGGER_NAME = :1 AND TRIGGER_GROUP = :2
So all of them are using bind variables.
I have seen that the columns used in the where clause all have histograms available. Would removing them reduce the number of invalidations?
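For reference, histograms can be removed by regathering statistics with METHOD_OPT => 'FOR ALL COLUMNS SIZE 1' - a sketch only; the owner name is hypothetical, and whether this actually reduces invalidations depends on whether bind peeking against the histograms is the cause:

```sql
-- Sketch: regather statistics on one of the Quartz tables without
-- histograms. Owner name 'QUARTZ' is made up for this example.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname       => 'QUARTZ',
    tabname       => 'QRTZ_TRIGGERS',
    method_opt    => 'FOR ALL COLUMNS SIZE 1',  -- no histograms
    no_invalidate => FALSE                      -- invalidate dependent
  );                                            -- cursors immediately
END;
/
```

NO_INVALIDATE => FALSE invalidates the dependent cursors right away instead of via rolling invalidation, which is the mechanism behind the ROLL_INVALID_MISMATCH counter.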
Unfortunately I did not save the information from v$sql_shared_cursor before the shared pool was flushed, but most of the invalidations occurred in the ROLL_INVALID_MISMATCH column, if that is of any help. There are some mismatches reported for AUTH_CHECK_MISMATCH and TRANSLATION_MISMATCH, but to my understanding they are caused by executing the statement for different schemas.
Looking at v$latch_misses, most of the waits for parent = 'row cache objects' are for "kqrpre: find obj" and "kqreqd: reget".
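For the next occurrence, a sketch of the kind of query that captures the mismatch reasons before flushing the shared pool (the version_count threshold of 100 is an arbitrary choice):

```sql
-- Find the statements with an unusually high number of child cursors.
SELECT sql_id, version_count, executions, invalidations
FROM   v$sqlarea
WHERE  version_count > 100
ORDER  BY version_count DESC;

-- Then, for one sql_id from the list above, show which mismatch flags
-- in v$sql_shared_cursor are set to 'Y' for each child cursor.
SELECT child_number, roll_invalid_mismatch, auth_check_mismatch,
       translation_mismatch, bind_mismatch
FROM   v$sql_shared_cursor
WHERE  sql_id = :sql_id;  -- bind the sql_id you are investigating
```

Each 'Y' column explains why that child cursor could not be shared with an existing one, which narrows down the cause of the version growth.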
In the AWR report, what does the Dictionary Cache Stats section say?
Here they are:
Dictionary Cache Stats
Cache Get Requests Pct Miss Scan Reqs Mod Reqs Final Usage
dc_awr_control 65 0.00 0 2 1
dc_constraints 729 33.33 0 729 1
dc_global_oids 60 23.33 0 0 31
dc_histogram_data 7,397 10.53 0 0 2,514
dc_histogram_defs 21,797 9.83 0 0 5,239
dc_object_grants 4 25.00 0 0 12
dc_objects 27,683 2.29 0 223 2,581
dc_profiles 1,842 0.00 0 0 1
dc_rollback_segments 1,634 0.00 0 0 39
dc_segments 7,335 6.94 0 360 1,679
dc_sequences 139 5.76 0 139 19
dc_table_scns 53 100.00 0 0 0
dc_tablespace_quotas 1,956 0.10 0 0 4
dc_tablespaces 17,488 0.00 0 0 11
dc_users 58,013 0.03 0 0 164
global database name 4,261 0.00 0 0 1
outstanding_alerts 54 0.00 0 0 9
sch_lj_oids 4 0.00 0 0 2
Library Cache Activity
Namespace Get Requests Pct Miss Pin Requests Pct Miss Reloads Invalidations
ACCOUNT_STATUS 3,664 0.03 0 0 0
BODY 560 2.14 2,343 0.60 0 0
CLUSTER 52 0.00 52 0.00 0 0
DBLINK 3,668 0.00 0 0 0
EDITION 1,857 0.00 3,697 0.00 0 0
INDEX 99 19.19 99 19.19 0 0
OBJECT ID 68 100.00 0 0 0
SCHEMA 2,646 0.00 0 0 0
SQL AREA 32,996 2.26 1,142,497 0.21 189 226
SQL AREA BUILD 848 62.15 0 0 0
SQL AREA STATS 860 82.09 860 82.09 0 0
TABLE/PROCEDURE 17,713 2.62 26,112 4.88 61 0
TRIGGER 1,704 2.00 6,737 0.52 1 0