Oracle RAC 10.2.0.3 increasing shared pool KQR L PO
Hi,
I've got ORA-04031 on my 4 node 10.2.0.3 Linux RAC.
The top 3 shared pool occupants are:
SQL> r
1 select * from (
2 select * from v$sgastat where pool = 'shared pool' order by 3 desc)
3 where
4* rownum <= 3
POOL NAME BYTES
shared pool KQR L PO 714319616
shared pool sql area 326563888
shared pool free memory 220592728
Any idea what KQR L PO is responsible for?
Regards.
Greg
Hi,
Cause
The shared pool is stressed and memory needs to be freed for new cursors. As a consequence, the dictionary cache is reduced in size by the LCK process, causing a temporary hang of the instance, since LCK cannot do any other work during that time. Because the dictionary cache is a memory area protected cluster-wide in RAC, the LCK process is responsible for freeing it in collaboration with the dictionary cache users (the sessions using cursors referenced in the dictionary cache). This can be time-consuming when the dictionary cache is big.
Solution
a. reduce the stress on the shared pool
=> by increasing it above the automatically reached value with dynamic SGA, e.g. when sga_target is set to 16G and shared_pool_size was 6G during the hang, set shared_pool_size to e.g. 8G.
=> by reducing the number of big cursors entering the shared pool, e.g. cursors using more than 1 MB of sharable_mem, e.g. via bind variables:
select sql_text from v$sqlarea where sharable_mem > 1048576;
b. reduce the dictionary cache usage in order to reduce the size of the dictionary cache, e.g.
=> when dc_histogram_defs is too high, it can point towards histogram calculation on all columns of the tables; histograms should only be calculated on indexed columns.
=> when dc_segments is high compared to dc_object_ids, it can point towards excessive use of partitioning. Reducing the number of partitions/subpartitions helps reduce the dictionary cache memory needed to manage them.
c. set _enable_shared_pool_durations = false to avoid that one duration (a memory area in the shared pool used for a specific purpose) must supply all the space required for that purpose; i.e. when the duration containing the dictionary cache needs to free memory, that duration is under extra stress since memory of other durations cannot be used. Setting it to false means any type of memory in the subpool can be used to free space. As a consequence, the number of subpools is reduced by a factor of the number of durations (4 in 10gR2). Hence tuning _kghdsidxcount is advisable, e.g. increasing it to keep subpool sizes manageable (see Note 396940.1).
d. check that patch 8666117 has been applied. This patch speeds up the processing needed to free memory.
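To see which dictionary cache components (point b above) are actually growing, a query against v$rowcache can help. A minimal sketch (10.2 column names):

```sql
-- Sketch: list the dictionary cache components by entry count.
-- High dc_histogram_defs or dc_segments counts point to the causes in (b).
select parameter, count, usage, gets, getmisses
from   v$rowcache
order  by count desc;
```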
Best regards,
Rafi.
http://rafioracledba.blogspot.com/
Similar Messages
-
Increased Shared Pool size longer hot backup time?
Hello,
I have a hot backup that usually took 2 hours to complete. Then we had to increase the shared pool size from 280 MB to 380 MB due to a performance issue.
One week after the size increase, the hot backup is taking 4 hours to complete. Since there were no other changes to the system, I suspect the size increase
of the shared pool from 280 MB to 380 MB might be the reason for such a drastic increase in backup time.
I would like to get your comments on my suspicion to the problem.
thank you
Paul.S wrote:
One week after increase of size the hot backup is taking 4 hours to complete. Since there was no other changes to the system I am suspecting size increase of Shared Pool from 280Mb to 380Mb might be is reason for such drastic increase of backup time.
Hmm... at first glance it does not seem that this is the problem, but perhaps it contributes to it.
When a SQL hits the SQL engine, the first thing the SQL engine does is determine whether there is an existing parsed and ready-to-use copy of that SQL. If there is, that existing SQL is re-used - thus the name SQL Shared Pool. The resulting soft parse of the SQL is a lot faster (or should be) than a hard parse. A hard parse is where there is no re-usable copy, and the SQL needs to be parsed and validated, its execution plan determined, etc.
Kind of like re-using an existing compiled program (with different input data) versus compiling that program before using it. (and that is what source SQL is - a program that needs to be compiled).
Okay, now what happens if there are hundreds of thousands of SQLs in your shared pool? The scan that the SQL engine does to determine whether there is a re-usable cursor becomes pretty slow. A soft parse quickly becomes very expensive.
The problem is typically clients that create SQLs that are not sharable (where the input data is hardcoded into the program, instead of input variables to the program). In other words, SQLs that do not use bind variables.
Each and every hardcoded SQL that hits the SQL engine is a brand new SQL, and it requires storage in the SQL Shared Pool. The shared pool footprint grows, you start getting errors like insufficient shared pool memory, and you increase the shared pool - which means even more space to store even more unique non-sharable SQLs, making soft and hard parses even more expensive.
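The difference can be sketched like this (table name and values are hypothetical; SQL*Plus syntax):

```sql
-- Hypothetical example: each literal creates a distinct, non-sharable cursor
-- in the shared pool...
select order_total from orders where order_id = 1001;
select order_total from orders where order_id = 1002;

-- ...while the bind-variable form is hard parsed once and then soft parsed
-- (re-used) for every subsequent value:
variable id number
exec :id := 1001
select order_total from orders where order_id = :id;
```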
Like moving the wall a few metres further and then running even faster into it.
Now if the backup fires off a load of SQLs against the SQL engine, the fast soft/hard parses can now be a lot slower than previously, thanks to a larger shared pool that now caters for more junk than before.
Unsure if this is the problem that you are experiencing, but assuming that your suspicion is correct, it offers an explanation as to why there can be a degradation in performance. -
Increase Shared Pool for erorr # ORA-04031
hi,
what do i need to look at before i increase the shared pool of our database?
there is just the one database instance on the machine.
i am concerned about the repercussions on the server.
i hope the information below is of help.
db version: 10.2.0.1.0
os: Red Hat Linux 3
SQL> select name, value from v$parameter where name like '%pool%';
name value
shared_pool_size 150994944
large_pool_size 33554432
java_pool_size 50331648
streams_pool_size 0
shared_pool_reserved_size 10066329
buffer_pool_keep
buffer_pool_recycle
global_context_pool_size
olap_page_pool_size 0
thanks,
santosh sewlal
Hi Santosh,
This is what I faced two days back! Now I am monitoring the issue! If you find any solution, please let me know how to avoid it!
An ORA-04031 error can be due either to inadequate sizing of the shared pool or to heavy
fragmentation, which prevents the database from finding large enough chunks of free memory.
You can monitor this with the two events...
alter system set events '4031 trace name errorstack level 3';
alter system set events '4031 trace name heapdump level 3';
Fragmentation is one of the causes of ORA-04031.
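As a rough check for fragmentation (a sketch; these views exist in 9i/10g), the reserved pool statistics show whether requests for large contiguous chunks are failing:

```sql
-- REQUEST_FAILURES > 0 means the reserved pool could not satisfy a request
-- for a large contiguous chunk - the classic ORA-04031 fragmentation symptom.
select requests, request_misses, request_failures, free_space
from   v$shared_pool_reserved;

-- Total free memory in the shared pool (can be plentiful yet fragmented):
select bytes
from   v$sgastat
where  pool = 'shared pool' and name = 'free memory';
```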
Please refer to these:
1. Article-ID: Note 146599.1
Title: Diagnosing and Resolving Error ORA-04031
2. Article-ID: Note 62143.1
Title: Understanding and Tuning the Shared Pool
3. Article-ID: Note 61623.1
This one is particular to Oracle 9i Release 2; hopefully the same holds for Oracle 10g.
Regards
Ravi -
Increase RAM do I need to increase shared pools
If I adjust my RAM by adding 12 GB, which gives it 32 GB total, what do my IPC pools have to be set at? Especially 10 and 40 on AIX?
What other params do I need? We have a heavy interface with batch jobs from our manufacturing floor systems.
Thanks
Mikie
Hi,
Virtual memory is physical memory (RAM) and shared memory combined, which is automatically allocated by the SAP system. So I suspect that we need not tune the shared memory parameters when adding additional physical memory.
With regards
Sudha -
Linux set up for Oracle RAC (real application cluster)
Hi Guys,
I'm working as an Oracle DBA.
Very curious to know the initial steps for a RAC setup at the OS level.
Can anyone provide useful guidelines for the same?
Although I know all the steps at the OS level, I have not done the setup before an Oracle RAC installation.
Want to increase knowledge on things like:
--how we share storage.
--how we set up the network (private & virtual IP) and how we can check that the NICs are working.
--and other required things.
Will appreciate your help, and it would be great if someone wants to share his/her personal experience.
Thanks in advance.
[email protected] wrote:
Want to increase knowledge on things like:
Here are very basic answers to very complex questions - from a pure Linux perspective running an Open Source stack and untainted kernel.
--how we share storage.
Using multipath - this should ship with most 2.6 kernels. The kernel sees the shared storage LUNs as SCSI devices - multipath does the rest. (And ASM can directly use a multipath device.)
On a physical layer. Typical setup (on a RAC node) is using a HBA PCI card that runs fibre connections into a SAN switch. You can also use Infiniband (IB) as the I/O layer (as Oracle's Exadata database machine does). In this case the servers will use HCA PCI cards, run IB cables into the switch, and so will the storage array run an IB cable into the switch.
--how we set up the network (private & virtual IP) and how we can check that the NICs are working.
Depends on the architecture chosen as Interconnect. Typical choices are GigE or InfiniBand (IB). Oracle's Exadata database machine (RAC) uses IB as already mentioned (and it is also our preferred Interconnect technology).
With IB you would use the OFED driver stack and have a range of ib.. commands available. These can be used to configure IP over IB (IPoIB) for use as an IP-based Interconnect, bonding of NICs, check a port's status, and so on.
--and other required things.
As both Daniel and Hans indicated, you are asking quite complex questions that require a manual (if not several) to be written in response. So best to refer to the manuals and OTN material available.
Also, if you and your company are serious about using RAC, then you should make use of Oracle's RAC Assurance group to assist you. They will provide you with starter kit information for the o/s selected. They will check every single configuration parameter afterwards and deliver a comprehensive report on what's wrong, what works and what doesn't. With recommended changes that need to be done. -
Hi All,
DB:oracle9iR2
os:solaris
how can I get the shared pool usage, free and total size, and the hit ratio in Oracle 9iR2? Can anyone help me?
POOL BYTES MB
shared pool used :
shared pool free :
shared pool (Total):
=================
Shared_pool hit ratio:
thanks.
Hi All,
thank you for all the responses.
thank you for all the responses..
Db:oracle 9iR2
os :solaris
Actually i am facing below problem..
prob: ORA-00604: error occurred at recursive SQL level 2
ORA-04031: unable to allocate 4224 bytes of shared memory ("shared pool","select obj#,type#,ctime,mtim...","sga heap(1,0)","library ca
che")
Wed Feb 8 19:33:43 2012
Errors in file /ora/admin/cddp/bdump/cddp_cjq0_2601.trc:
ORA-00604: error occurred at recursive SQL level 2
ORA-04031: unable to allocate 4224 bytes of shared memory ("shared pool","select obj#,type#,ctime,mtim...","sga heap(1,0)","library ca
che")
Wed Feb 8 19:33:43 2012
Errors in file /ora/admin/cddp/bdump/cddp_cjq0_2601.trc:
ORA-00604: error occurred at recursive SQL level 2
ORA-04031: unable to allocate 4224 bytes of shared memory ("shared pool","select obj#,type#,ctime,mtim...","sga heap(1,0)","library ca
che")
Wed Feb 8 19:33:48 2012
Errors in file /ora/admin/cddp/bdump/cddp_cjq0_2601.trc:
ORA-00604: error occurred at recursive SQL level 2
ORA-04031: unable to allocate 4224 bytes of shared memory ("shared pool","select obj#,type#,ctime,mtim...","sga heap(1,0)","library ca
che")
========================================
i was running with a 200 MB shared pool... a couple of days back I unexpectedly got the above error... as a temporary solution I flushed the shared pool...
the next day the same error repeated... so I increased the shared pool size to 420 MB...
while monitoring, the db is using shared pool memory up to 400 MB with an average shared pool hit ratio of 94.5% (the database was started recently)...
Earlier shared_pool size:200MB
Now:420 MB
Avg usage:up to 400MB
my question is :
1) if we have many different sql statements in the shared pool, won't
oracle flush the shared pool (i.e. age entries out based on LRU or some algorithm) if a program needs memory in the shared pool?
2) will fragmentation cause the above error?
3) can anyone please explain how to check what's going on in the shared pool (internally)? Why is it using 400 MB compared to the earlier average usage of 170 MB? Any idea (how to find the root cause)?
4)will plan table cause any issue ?
can any one explain to me...
thanks..
Edited by: kk001 on Feb 11, 2012 4:54 PM
Edited by: kk001 on Feb 11, 2012 4:56 PM -
What is "KQR M PO" in shared pool
Hi,
I am using oracle version 10.2.0.4. I fired below SQL.
select * from ( select pool,
name,
bytes/1024/1024 bytes
from v$sgastat
where pool='shared pool'
order by 3 desc )
where rownum < 6;
POOL NAME BYTES/1024/1024
shared pool KQR M PO 404.138763
shared pool free memory 152.80159
shared pool KQR L PO 130.926659
shared pool sql area 114.5914
shared pool obj stat memo 84.535881
shared pool KQR ENQ 65.2046814
Any idea what this "KQR M PO" in the shared pool is? The reason I am asking: I am getting ORA-04031 while executing the below command.
drop user <user name> cascade;
So I was trying to monitor who is eating up this much space in shared pool.
Thanks in advance.
Best Regards,
oratest
If you still want to know why, poke about on Tanel Poder's blog and knowledge base; he shows how to see all the subpool components and how to investigate these things.
X$KQR* tables describe rowcache, performance and parent/subordinate objects in the data dictionary.
select kqftanam from x$kqfta where kqftanam like 'X$KQR%'
SYS@TTST> /
KQFTANAM
X$KQRST
X$KQRPD
X$KQRSD
X$KQRFP
X$KQRFS
So wild speculation would be some bad mojo in recursive code for parent/child drop. -
Oracle RAC with QFS shared storage going down when one disk fails
Hello,
I have an oracle RAC on my testing environment. The configuration follows
nodes: V210
Shared Storage: A5200
#clrg status
Group Name Node Name Suspended Status
rac-framework-rg host1 No Online
host2 No Online
scal-racdg-rg host1 No Online
host2 No Online
scal-racfs-rg host1 No Online
host2 No Online
qfs-meta-rg host1 No Online
host2 No Offline
rac_server_proxy-rg host1 No Online
host2 No Online
#metastat -s racdg
racdg/d200: Concat/Stripe
Size: 143237376 blocks (68 GB)
Stripe 0:
Device Start Block Dbase Reloc
d3s0 0 No No
racdg/d100: Concat/Stripe
Size: 143237376 blocks (68 GB)
Stripe 0:
Device Start Block Dbase Reloc
d2s0 0 No No
#more /etc/opt/SUNWsamfs/mcf
racfs 10 ma racfs - shared
/dev/md/racdg/dsk/d100 11 mm racfs -
/dev/md/racdg/dsk/d200 12 mr racfs -
When the disk /dev/did/dsk/d2 failed (I failed it by removing it from the array), Oracle RAC went offline on both nodes, and then both nodes panicked and rebooted. Now #clrg status shows the output below.
Group Name Node Name Suspended Status
rac-framework-rg host1 No Pending online blocked
host2 No Pending online blocked
scal-racdg-rg host1 No Online
host2 No Online
scal-racfs-rg host1 No Online
host2 No Pending online blocked
qfs-meta-rg host1 No Offline
host2 No Offline
rac_server_proxy-rg host1 No Pending online blocked
host2 No Pending online blocked
crs is not started on either node. I would like to know if anybody has faced this kind of problem when using QFS on a diskgroup. When one disk fails, Oracle is not supposed to go offline, as the other disk is working; also, my QFS configuration is supposed to mirror these two disks!
Many thanks in advance
Ushas Symon
I'm not sure why you say QFS is mirroring these disks. Shared QFS has no inherent mirroring capability. It relies on the underlying volume manager (VM) or array to do that for it. If you need to mirror your storage, you do it at the VM level by creating a mirrored metadevice.
Tim
--- -
How to change permissions on shared storage to install Oracle RAC on vmware
Hello:
- I am trying to install Oracle RAC using vmware.
- But when I try to change permissions on the shared storage,
the owner and group of the shared storage do not change.
- Has anyone else installed Oracle RAC onto VMware?
- I am using Oracle RAC 10.2.0, solaris 5.10 x86
Thanks
Jlem
I have successfully installed RAC on VMware following this article; maybe you can give it a try:
Oracle 10g RAC On Linux Using VMware Server
http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnCentos4UsingVMware.php -
Oracle rac raw device as shared storage
Hi,
I'm new to Oracle RAC,
and I wish to install 11gR1 RAC on my laptop with Linux 4 as the platform (on VMware),
for that I prepared 4 partitions (on node1):
/dev/sdb1 - for ocr
/dev/sdb2 - for voting disk
/dev/sdb3 - for asmdisk group
/dev/sda5 - fro asmdisk group
assuming external redundancy for the OCR and voting disk, I kept only one disk each,
and i configured following in /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/sdb1 -- ocr
/dev/raw/raw2 /dev/sdb2 -- voting disk
/dev/raw/raw3 /dev/sdb3 -- asmdisk group
/dev/raw/raw4 /dev/sdb5 -- asmdisk group
and my question is: how can node2 see these raw devices as shared storage?
thanks for any support
hi, thanks for your suggestion,
this may be OK for VMware, but what about a non-VMware environment?
how can I make a raw device shared storage?
one more thing: all the docs that I followed on the net configured node1 partitions as the shared storage.
please help me in this regard.
Is it possible to install Oracle RAC without shared storage
Dear All,
I would like to seek for your advice.
I got two different servers. We call it node 1 and node 2. And two different instances name.
Node 1 -> instance name as "ORCL1"
Node 2 -> instance name as "ORCL2"
For the system we need Oracle RAC active-active cluster mode. Our objective is to have 2 replicated databases, in other words we need 2 instances of the same database automatically replicated for 100% up time to the Application server. We have 2 separate database machines and 2 application server machines. We need our application server to connect to any of the databases at any point of time and be having a consistent data on both database machines. We only need the database to be in a cluster mode, we won't need the OS to be in a cluster. There is no shared storage in this case.
Can this be done? Please advise.
You should review RAC concepts, and the meaning of "instance" and "database".
For the system we need Oracle RAC active-active cluster mode.
RAC = a single database with multiple instances, all accessing the same shared storage; no replication involved.
Our objective is to have 2 replicated databases, in other words we need 2 instances of the same database automatically replicated for 100% up time to the Application server.
What you describe here is multiple databases with multiple instances, replicated between each other.
We have 2 separate database machines and 2 application server machines. We need our application server to connect to any of the databases at any point of time and have consistent data on both database machines. We only need the database to be in cluster mode; we won't need the OS to be in a cluster. There is no shared storage in this case.
No shared storage = no RAC.
You will have two separate databases synchronizing continuously.
You can use, for example, Streams / Advanced Replication (with a multi-master configuration).
If you don't insist on an active-active configuration, you can also use Data Guard to build a standby database -
Considering shared storage for Oracle RAC 10g
Hi, guys!
My Oracle RAC will run on VMware ESXi 5.5, so both nodes and the shared storage are VMs. Don't blame me for this; I don't have another choice.
I am choosing shared storage for Oracle RAC: between an NFS and an iSCSI server, both of which can be built on Red Hat Linux or FreeNAS.
Can you guys help me make the choice?
RedHat or FreeNAS
ISCSI or NFS
Any help will be appreciated.
JohnWatson wrote:
NFS is really easy. Create your zero-filled files, set the ownership and access modes, and point your asm_diskstring at them. Much simpler than configuring an iSCSI target and initiators, and then messing about with ASMlib or udev.
I recorded a public lecture that (if I remember correctly) describes it here, Oracle ASM Free Tutorial
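A sketch of the NFS approach John describes (paths and disk group name are hypothetical; the zero-filled files would be created on the NFS mount with dd and given oracle ownership beforehand):

```sql
-- Point ASM at the pre-created files on the NFS mount:
alter system set asm_diskstring = '/u02/asmdisks/*' scope=spfile;

-- Then build a disk group over them (external redundancy as an example):
create diskgroup DATA external redundancy
  disk '/u02/asmdisks/disk1', '/u02/asmdisks/disk2';
```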
I will be using OCFS2 as cluster FS. Does it make any difference for NFS vs ISCSI? -
HOW INCREASE SGA IN ORACLE RAC 10 G WITH 2 NODES
How do I increase sga_max_size and sga_target in Oracle RAC 10g with 2 nodes?
I have Oracle 10g on HP-UX 11i in a RAC (2 nodes)
with an 8G SGA, and I want to increase it to 12G;
can I alter these parameters without shutting down the entire database? Can I alter and apply these changes on one node first and later on the second node?
I used on the first node:
1- alter system set sga_max_size=16g scope=spfile;
2- alter system set sga_target=12g scope=spfile;
later I restarted the instances one by one:
srvctl stop instance -d my_database -i my_instance1 -o immediate
srvctl start instance -d my_database -i my_instance1
3- on the second node:
srvctl stop instance -d my_database -i my_instance2 -o immediate
srvctl start instance -d my_database -i my_instance2
but my SGA is the SAME 8G... WHY DID IT NOT CHANGE?
I changed these parameters and restarted my instance on the first node, then stopped and started the second node using srvctl, but my SGA did not change; it is still 8G. However, these changes are in the spfile:
prd2.sga_max_size=8589934592#internally adjusted
prd1.sga_max_size=8589934592#internally adjusted
*.sga_max_size=17179869184
prd2.sga_target=8589934592
prd1.sga_target=8589934592
*.sga_target=12884901888
prd2.thread=2
prd1.thread=1
how can I apply these changes node by node, or do I need to shut down the entire database?
I need to make these changes without affecting my application, because I cannot shut down both nodes...
Edited by: user568681 on 02-sep-2010 14:32
Hi,
I just checked on a test RAC configuration (HP-UX, 10.2.0.4)
You don't need to stop the database.
Keep your "rolling" original scenario, but change:
alter system set sga_max_size=16g scope=spfile;
alter system set sga_target=12g scope=spfile;
to:
alter system set sga_max_size=16g scope=spfile sid='PRD1';
alter system set sga_target=12g scope=spfile sid='PRD1';
alter system set sga_max_size=16g scope=spfile sid='PRD2';
alter system set sga_target=12g scope=spfile sid='PRD2';
Actually,
alter system set sga_max_size=16g scope=spfile;
alter system set sga_max_size=16g scope=spfile sid='*';
change the value globally for every instance in the spfile ("*.XXXXXX" is updated), but they do not remove the specific entries already assigned to one particular instance (and that is your case!).
Alternatively, you could reset the values assigned specifically to one instance with "alter system reset" so that only the "*.XXXX" entries remain for those parameters.
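The "alter system reset" alternative would look something like this (a sketch, using the sid values from the spfile shown above):

```sql
-- Remove the prd1/prd2-specific entries so only the *. values apply:
alter system reset sga_max_size scope=spfile sid='prd1';
alter system reset sga_max_size scope=spfile sid='prd2';
alter system reset sga_target scope=spfile sid='prd1';
alter system reset sga_target scope=spfile sid='prd2';
```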
Best regards
Phil -
Oracle RAC binaries on vxfs shared file system
Hi,
Is it possible to install the Oracle binaries on a VxFS cluster file system for Oracle RAC under Sun Cluster? Because as far as I know, we cannot use a VxFS cluster file system for our Oracle datafiles.
TIA
The above post is incorrect. You can have a cluster (global) file system using VxVM and VxFS. You do not need VxVM/CVM for this. A cluster file system using VxVM+VxFS can be used for the Oracle binaries but cannot be used for Oracle RAC data files, which are updated from both nodes simultaneously.
If further clarification is needed, please post.
Tim
--- -
Active session Spike on Oracle RAC 11G R2 on HP UX
Dear Experts,
We need urgent help please, as we are facing very low performance in the production database.
We have Oracle 11g RAC on an HP-UX environment. Following is the ADDM report. Kindly check and please help me figure out the issue and resolve it at the earliest.
---------Instance 1---------------
ADDM Report for Task 'TASK_36650'
Analysis Period
AWR snapshot range from 11634 to 11636.
Time period starts at 21-JUL-13 07.00.03 PM
Time period ends at 21-JUL-13 09.00.49 PM
Analysis Target
Database 'MCMSDRAC' with DB ID 2894940361.
Database version 11.2.0.1.0.
ADDM performed an analysis of instance mcmsdrac1, numbered 1 and hosted at
mcmsdbl1.
Activity During the Analysis Period
Total database time was 38466 seconds.
The average number of active sessions was 5.31.
Summary of Findings
Description Active Sessions Recommendations
Percent of Activity
1 CPU Usage 1.44 | 27.08 1
2 Interconnect Latency .07 | 1.33 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Findings and Recommendations
Finding 1: CPU Usage
Impact is 1.44 active sessions, 27.08% of total activity.
Host CPU was a bottleneck and the instance was consuming 99% of the host CPU.
All wait times will be inflated by wait for CPU.
Host CPU consumption was 99%.
Recommendation 1: Host Configuration
Estimated benefit is 1.44 active sessions, 27.08% of total activity.
Action
Consider adding more CPUs to the host or adding instances serving the
database on other hosts.
Action
Session CPU consumption was throttled by the Oracle Resource Manager.
Consider revising the resource plan that was active during the analysis
period.
Finding 2: Interconnect Latency
Impact is .07 active sessions, 1.33% of total activity.
Higher than expected latency of the cluster interconnect was responsible for
significant database time on this instance.
The instance was consuming 110 kilobits per second of interconnect bandwidth.
20% of this interconnect bandwidth was used for global cache messaging, 21%
for parallel query messaging and 7% for database lock management.
The average latency for 8K interconnect messages was 42153 microseconds.
The instance is using the private interconnect device "lan2" with IP address
172.16.200.71 and source "Oracle Cluster Repository".
The device "lan2" was used for 100% of interconnect traffic and experienced 0
send or receive errors during the analysis period.
Recommendation 1: Host Configuration
Estimated benefit is .07 active sessions, 1.33% of total activity.
Action
Investigate cause of high network interconnect latency between database
instances. Oracle's recommended solution is to use a high speed
dedicated network.
Action
Check the configuration of the cluster interconnect. Check OS setup like
adapter setting, firmware and driver release. Check that the OS's socket
receive buffers are large enough to store an entire multiblock read. The
value of parameter "db_file_multiblock_read_count" may be decreased as a
workaround.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additional Information
Miscellaneous Information
Wait class "Application" was not consuming significant database time.
Wait class "Cluster" was not consuming significant database time.
Wait class "Commit" was not consuming significant database time.
Wait class "Concurrency" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
Wait class "Network" was not consuming significant database time.
Wait class "User I/O" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.
Hard parsing of SQL statements was not consuming significant database time.
The database's maintenance windows were active during 100% of the analysis
period.
----------------Instance 2 --------------------
ADDM Report for Task 'TASK_36652'
Analysis Period
AWR snapshot range from 11634 to 11636.
Time period starts at 21-JUL-13 07.00.03 PM
Time period ends at 21-JUL-13 09.00.49 PM
Analysis Target
Database 'MCMSDRAC' with DB ID 2894940361.
Database version 11.2.0.1.0.
ADDM performed an analysis of instance mcmsdrac2, numbered 2 and hosted at
mcmsdbl2.
Activity During the Analysis Period
Total database time was 2898 seconds.
The average number of active sessions was .4.
Summary of Findings
Description Active Sessions Recommendations
Percent of Activity
1 Top SQL Statements .11 | 27.65 5
2 Interconnect Latency .1 | 24.15 1
3 Shared Pool Latches .09 | 22.42 1
4 PL/SQL Execution .06 | 14.39 2
5 Unusual "Other" Wait Event .03 | 8.73 4
6 Unusual "Other" Wait Event .03 | 6.42 3
7 Unusual "Other" Wait Event .03 | 6.29 6
8 Hard Parse .02 | 5.5 0
9 Soft Parse .02 | 3.86 2
10 Unusual "Other" Wait Event .01 | 3.75 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Findings and Recommendations
Finding 1: Top SQL Statements
Impact is .11 active sessions, 27.65% of total activity.
SQL statements consuming significant database time were found. These
statements offer a good opportunity for performance improvement.
Recommendation 1: SQL Tuning
Estimated benefit is .05 active sessions, 12.88% of total activity.
Action
Investigate the PL/SQL statement with SQL_ID "d1s02myktu19h" for
possible performance improvements. You can supplement the information
given here with an ASH report for this SQL_ID.
Related Object
SQL statement with SQL_ID d1s02myktu19h.
begin dbms_utility.validate(:1,:2,:3,:4); end;
Rationale
The SQL Tuning Advisor cannot operate on PL/SQL statements.
Rationale
Database time for this SQL was divided as follows: 13% for SQL
execution, 2% for parsing, 85% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "d1s02myktu19h" was executed 48 times and had
an average elapsed time of 7 seconds.
Rationale
Waiting for event "library cache pin" in wait class "Concurrency"
accounted for 70% of the database time spent in processing the SQL
statement with SQL_ID "d1s02myktu19h".
Rationale
Top level calls to execute the PL/SQL statement with SQL_ID
"63wt8yna5umd6" are responsible for 100% of the database time spent on
the PL/SQL statement with SQL_ID "d1s02myktu19h".
Related Object
SQL statement with SQL_ID 63wt8yna5umd6.
begin DBMS_UTILITY.COMPILE_SCHEMA( 'TPAUSER', FALSE ); end;
Recommendation 2: SQL Tuning
Estimated benefit is .02 active sessions, 4.55% of total activity.
Action
Run SQL Tuning Advisor on the SELECT statement with SQL_ID
"fk3bh3t41101x".
Related Object
SQL statement with SQL_ID fk3bh3t41101x.
SELECT MEM.MEMBER_CODE ,MEM.E_NAME,Pol.Policy_no
,pol.date_from,pol.date_to,POL.E_NAME,MEM.SEX,(SYSDATE-MEM.BIRTH_DATE
) AGE,POL.SCHEME_NO FROM TPAUSER.MEMBERS MEM,TPAUSER.POLICY POL WHERE
POL.QUOTATION_NO=MEM.QUOTATION_NO AND POL.BRANCH_CODE=MEM.BRANCH_CODE
and endt_no=(select max(endt_no) from tpauser.members mm where
mm.member_code=mem.member_code AND mm.QUOTATION_NO=MEM.QUOTATION_NO)
and member_code like '%' || nvl(:1,null) ||'%' ORDER BY MEMBER_CODE
Rationale
The SQL spent 92% of its database time on CPU, I/O and Cluster waits.
This part of database time may be improved by the SQL Tuning Advisor.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "fk3bh3t41101x" was executed 14 times and had
an average elapsed time of 4.9 seconds.
Rationale
At least one execution of the statement ran in parallel.
Recommendation 3: SQL Tuning
Estimated benefit is .02 active sessions, 3.79% of total activity.
Action
Run SQL Tuning Advisor on the SELECT statement with SQL_ID
"7mhjbjg9ntqf5".
Related Object
SQL statement with SQL_ID 7mhjbjg9ntqf5.
SELECT SUM(CNT) FROM (SELECT COUNT(PROC_CODE) CNT FROM
TPAUSER.TORBINY_PROCEDURE WHERE BRANCH_CODE = :B6 AND QUOTATION_NO =
:B5 AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND PR_EFFECTIVE_DATE<=
:B2 AND PROC_CODE = :B1 UNION SELECT COUNT(MED_CODE) CNT FROM
TPAUSER.TORBINY_MEDICINE WHERE BRANCH_CODE = :B6 AND QUOTATION_NO =
:B5 AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND M_EFFECTIVE_DATE<= :B2
AND MED_CODE = :B1 UNION SELECT COUNT(LAB_CODE) CNT FROM
TPAUSER.TORBINY_LAB WHERE BRANCH_CODE = :B6 AND QUOTATION_NO = :B5
AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND L_EFFECTIVE_DATE<= :B2 AND
LAB_CODE = :B1 )
Rationale
The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
This part of database time may be improved by the SQL Tuning Advisor.
Rationale
Database time for this SQL was divided as follows: 0% for SQL execution,
0% for parsing, 100% for PL/SQL execution and 0% for Java execution.
Rationale
SQL statement with SQL_ID "7mhjbjg9ntqf5" was executed 31 times and had
an average elapsed time of 3.4 seconds.
Rationale
Top level calls to execute the SELECT statement with SQL_ID
"a11nzdnd91gsg" are responsible for 100% of the database time spent on
the SELECT statement with SQL_ID "7mhjbjg9ntqf5".
Related Object
SQL statement with SQL_ID a11nzdnd91gsg.
SELECT POLICY_NO,SCHEME_NO FROM TPAUSER.POLICY WHERE QUOTATION_NO
=:B1
Recommendation 4: SQL Tuning
Estimated benefit is .01 active sessions, 3.03% of total activity.
Action
Investigate the SELECT statement with SQL_ID "4uqs4jt7aca5s" for
possible performance improvements. You can supplement the information
given here with an ASH report for this SQL_ID.
Related Object
SQL statement with SQL_ID 4uqs4jt7aca5s.
SELECT DISTINCT USER_ID FROM GV$SESSION, USERS WHERE UPPER (USERNAME)
= UPPER (USER_ID) AND USERS.APPROVAL_CLAIM='VC' AND USER_ID=:B1
Rationale
The SQL spent only 0% of its database time on CPU, I/O and Cluster
waits. Therefore, the SQL Tuning Advisor is not applicable in this case.
Look at performance data for the SQL to find potential improvements.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "4uqs4jt7aca5s" was executed 261 times and had
an average elapsed time of 0.35 seconds.
Rationale
At least one execution of the statement ran in parallel.
Rationale
Top level calls to execute the PL/SQL statement with SQL_ID
"91vt043t78460" are responsible for 100% of the database time spent on
the SELECT statement with SQL_ID "4uqs4jt7aca5s".
Related Object
SQL statement with SQL_ID 91vt043t78460.
begin TPAUSER.RECEIVE_NEW_FAX_APRROVAL(:V00001,:V00002,:V00003,:V0000
4); end;
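Since ADDM suggests supplementing this finding with ASH data, a quick per-event breakdown for the SQL_ID can be pulled straight from the in-memory ASH buffer (a sketch; requires the Diagnostics Pack license, and only covers recent samples):

```sql
-- Sketch: where did samples of this SQL_ID spend their time?
SELECT NVL(event, 'ON CPU') AS event, COUNT(*) AS samples
FROM   gv$active_session_history
WHERE  sql_id = '4uqs4jt7aca5s'
GROUP  BY NVL(event, 'ON CPU')
ORDER  BY samples DESC;
```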
Recommendation 5: SQL Tuning
Estimated benefit is .01 active sessions, 3.03% of total activity.
Action
Run SQL Tuning Advisor on the SELECT statement with SQL_ID
"7kt28fkc0yn5f".
Related Object
SQL statement with SQL_ID 7kt28fkc0yn5f.
SELECT COUNT(*) FROM TPAUSER.APPROVAL_MASTER WHERE APPROVAL_STATUS IS
NULL AND (UPPER(CODED) = UPPER(:B1 ) OR UPPER(PROCESSED_BY) =
UPPER(:B1 ))
Rationale
The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
This part of database time may be improved by the SQL Tuning Advisor.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "7kt28fkc0yn5f" was executed 1034 times and
had an average elapsed time of 0.063 seconds.
Rationale
Top level calls to execute the PL/SQL statement with SQL_ID
"91vt043t78460" are responsible for 100% of the database time spent on
the SELECT statement with SQL_ID "7kt28fkc0yn5f".
Related Object
SQL statement with SQL_ID 91vt043t78460.
begin TPAUSER.RECEIVE_NEW_FAX_APRROVAL(:V00001,:V00002,:V00003,:V0000
4); end;
Finding 2: Interconnect Latency
Impact is .1 active sessions, 24.15% of total activity.
Higher than expected latency of the cluster interconnect was responsible for
significant database time on this instance.
The instance was consuming 128 kilo bits per second of interconnect bandwidth.
17% of this interconnect bandwidth was used for global cache messaging, 6% for
parallel query messaging and 8% for database lock management.
The average latency for 8K interconnect messages was 41863 microseconds.
The instance is using the private interconnect device "lan2" with IP address
172.16.200.72 and source "Oracle Cluster Repository".
The device "lan2" was used for 100% of interconnect traffic and experienced 0
send or receive errors during the analysis period.
Recommendation 1: Host Configuration
Estimated benefit is .1 active sessions, 24.15% of total activity.
Action
Investigate cause of high network interconnect latency between database
instances. Oracle's recommended solution is to use a high speed
dedicated network.
Action
Check the configuration of the cluster interconnect. Check OS setup like
adapter setting, firmware and driver release. Check that the OS's socket
receive buffers are large enough to store an entire multiblock read. The
value of parameter "db_file_multiblock_read_count" may be decreased as a
workaround.
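Before touching the OS or driver settings, it is worth confirming what each instance actually registered as its interconnect. A sketch query against the cluster interconnect view:

```sql
-- Sketch: which interface each instance uses for the interconnect,
-- and where that setting came from (OCR, init.ora, or OS default).
SELECT inst_id, name, ip_address, is_public, source
FROM   gv$cluster_interconnects
ORDER  BY inst_id;
```

If any instance shows IS_PUBLIC = 'YES', interconnect traffic is crossing the public network, which would explain latencies in the tens of milliseconds.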
Symptoms That Led to the Finding:
Inter-instance messaging was consuming significant database time on this
instance.
Impact is .06 active sessions, 14.23% of total activity.
Wait class "Cluster" was consuming significant database time.
Impact is .06 active sessions, 14.23% of total activity.
Finding 3: Shared Pool Latches
Impact is .09 active sessions, 22.42% of total activity.
Contention for latches related to the shared pool was consuming significant
database time.
Waits for "library cache lock" amounted to 5% of database time.
Waits for "library cache pin" amounted to 17% of database time.
Recommendation 1: Application Analysis
Estimated benefit is .09 active sessions, 22.42% of total activity.
Action
Investigate the cause for latch contention using the given blocking
sessions or modules.
Rationale
The session with ID 17 and serial number 15595 in instance number 1 was
the blocking session responsible for 34% of this recommendation's
benefit.
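If that session is still connected, it can be identified directly. A sketch lookup for the blocking session ADDM names (SID 17, serial# 15595 on instance 1):

```sql
-- Sketch: identify the blocking session called out above.
SELECT inst_id, sid, serial#, username, program, module,
       event, seconds_in_wait
FROM   gv$session
WHERE  inst_id = 1 AND sid = 17 AND serial# = 15595;
```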
Symptoms That Led to the Finding:
Wait class "Concurrency" was consuming significant database time.
Impact is .1 active sessions, 24.96% of total activity.
Finding 4: PL/SQL Execution
Impact is .06 active sessions, 14.39% of total activity.
PL/SQL execution consumed significant database time.
Recommendation 1: SQL Tuning
Estimated benefit is .05 active sessions, 12.5% of total activity.
Action
Tune the entry point PL/SQL "SYS.DBMS_UTILITY.COMPILE_SCHEMA" of type
"PACKAGE" and ID 6019. Refer to the PL/SQL documentation for additional
information.
Rationale
318 seconds spent in executing PL/SQL "SYS.DBMS_UTILITY.VALIDATE#2" of
type "PACKAGE" and ID 6019.
Recommendation 2: SQL Tuning
Estimated benefit is .01 active sessions, 1.89% of total activity.
Action
Tune the entry point PL/SQL
"SYSMAN.EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS" of type "PACKAGE" and
ID 68654. Refer to the PL/SQL documentation for additional information.
Finding 5: Unusual "Other" Wait Event
Impact is .03 active sessions, 8.73% of total activity.
Wait event "DFS lock handle" in wait class "Other" was consuming significant
database time.
Recommendation 1: Application Analysis
Estimated benefit is .03 active sessions, 8.73% of total activity.
Action
Investigate the cause for high "DFS lock handle" waits. Refer to
Oracle's "Database Reference" for the description of this wait event.
Recommendation 2: Application Analysis
Estimated benefit is .03 active sessions, 8.27% of total activity.
Action
Investigate the cause for high "DFS lock handle" waits in Service
"mcmsdrac".
Recommendation 3: Application Analysis
Estimated benefit is .02 active sessions, 5.05% of total activity.
Action
Investigate the cause for high "DFS lock handle" waits in Module "TOAD
9.7.2.5".
Recommendation 4: Application Analysis
Estimated benefit is .01 active sessions, 3.21% of total activity.
Action
Investigate the cause for high "DFS lock handle" waits in Module
"toad.exe".
Symptoms That Led to the Finding:
Wait class "Other" was consuming significant database time.
Impact is .15 active sessions, 38.29% of total activity.
Finding 6: Unusual "Other" Wait Event
Impact is .03 active sessions, 6.42% of total activity.
Wait event "reliable message" in wait class "Other" was consuming significant
database time.
Recommendation 1: Application Analysis
Estimated benefit is .03 active sessions, 6.42% of total activity.
Action
Investigate the cause for high "reliable message" waits. Refer to
Oracle's "Database Reference" for the description of this wait event.
Recommendation 2: Application Analysis
Estimated benefit is .03 active sessions, 6.42% of total activity.
Action
Investigate the cause for high "reliable message" waits in Service
"mcmsdrac".
Recommendation 3: Application Analysis
Estimated benefit is .02 active sessions, 4.13% of total activity.
Action
Investigate the cause for high "reliable message" waits in Module "TOAD
9.7.2.5".
Symptoms That Led to the Finding:
Wait class "Other" was consuming significant database time.
Impact is .15 active sessions, 38.29% of total activity.
Finding 7: Unusual "Other" Wait Event
Impact is .03 active sessions, 6.29% of total activity.
Wait event "enq: PS - contention" in wait class "Other" was consuming
significant database time.
Recommendation 1: Application Analysis
Estimated benefit is .03 active sessions, 6.29% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits. Refer to
Oracle's "Database Reference" for the description of this wait event.
Recommendation 2: Application Analysis
Estimated benefit is .02 active sessions, 6.02% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits in Service
"mcmsdrac".
Recommendation 3: Application Analysis
Estimated benefit is .02 active sessions, 4.93% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits with
P1,P2,P3 ("name|mode, instance, slave ID") values "1347616774", "1" and
"3599" respectively.
Recommendation 4: Application Analysis
Estimated benefit is .01 active sessions, 2.74% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits in Module
"Inbox Reader_92.exe".
Recommendation 5: Application Analysis
Estimated benefit is .01 active sessions, 2.74% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits in Module
"TOAD 9.7.2.5".
Recommendation 6: Application Analysis
Estimated benefit is .01 active sessions, 1.37% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits with
P1,P2,P3 ("name|mode, instance, slave ID") values "1347616774", "1" and
"3598" respectively.
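The P1 value reported above packs the enqueue name and requested mode into one number, and can be decoded with the standard formula from the Database Reference. For 1347616774 (hex 0x50530006) this yields lock name "PS" in mode 6, consistent with the "enq: PS - contention" finding:

```sql
-- Sketch: decode the "name|mode" P1 value reported above.
-- High-order bytes hold the two-character enqueue name,
-- low-order bytes the requested lock mode.
SELECT chr(bitand(1347616774, -16777216) / 16777215) ||
       chr(bitand(1347616774,  16711680) / 65535)  AS lock_name,
       bitand(1347616774, 65535)                   AS lock_mode
FROM   dual;
```

PS enqueues protect parallel execution slaves, so heavy waits here usually point back at the parallel query activity noted elsewhere in this report.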
Symptoms That Led to the Finding:
Wait class "Other" was consuming significant database time.
Impact is .15 active sessions, 38.29% of total activity.
Finding 8: Hard Parse
Impact is .02 active sessions, 5.5% of total activity.
Hard parsing of SQL statements was consuming significant database time.
Hard parses due to cursor environment mismatch were not consuming significant
database time.
Hard parsing SQL statements that encountered parse errors was not consuming
significant database time.
Hard parses due to literal usage and cursor invalidation were not consuming
significant database time.
The Oracle instance memory (SGA and PGA) was adequately sized.
No recommendations are available.
Symptoms That Led to the Finding:
Contention for latches related to the shared pool was consuming
significant database time.
Impact is .09 active sessions, 22.42% of total activity.
Wait class "Concurrency" was consuming significant database time.
Impact is .1 active sessions, 24.96% of total activity.
Finding 9: Soft Parse
Impact is .02 active sessions, 3.86% of total activity.
Soft parsing of SQL statements was consuming significant database time.
Recommendation 1: Application Analysis
Estimated benefit is .02 active sessions, 3.86% of total activity.
Action
Investigate application logic to keep open the frequently used cursors.
Note that cursors are closed by both cursor close calls and session
disconnects.
Recommendation 2: Database Configuration
Estimated benefit is .02 active sessions, 3.86% of total activity.
Action
Consider increasing the session cursor cache size by increasing the
value of parameter "session_cached_cursors".
Rationale
The value of parameter "session_cached_cursors" was "100" during the
analysis period.
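Before raising session_cached_cursors, it helps to gauge how often soft parses are already being satisfied from the cache. A sketch using the instance-wide statistics:

```sql
-- Sketch: compare session cursor cache hits against total parses.
-- A low hits-to-parses ratio suggests the cache is undersized.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('session cursor cache hits',
                'session cursor cache count',
                'parse count (total)');
```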
Symptoms That Led to the Finding:
Contention for latches related to the shared pool was consuming
significant database time.
Impact is .09 active sessions, 22.42% of total activity.
Wait class "Concurrency" was consuming significant database time.
Impact is .1 active sessions, 24.96% of total activity.
Finding 10: Unusual "Other" Wait Event
Impact is .01 active sessions, 3.75% of total activity.
Wait event "IPC send completion sync" in wait class "Other" was consuming
significant database time.
Recommendation 1: Application Analysis
Estimated benefit is .01 active sessions, 3.75% of total activity.
Action
Investigate the cause for high "IPC send completion sync" waits. Refer
to Oracle's "Database Reference" for the description of this wait event.
Recommendation 2: Application Analysis
Estimated benefit is .01 active sessions, 3.75% of total activity.
Action
Investigate the cause for high "IPC send completion sync" waits with P1
("send count") value "1".
Recommendation 3: Application Analysis
Estimated benefit is .01 active sessions, 2.59% of total activity.
Action
Investigate the cause for high "IPC send completion sync" waits in
Service "mcmsdrac".
Recommendation 4: Application Analysis
Estimated benefit is .01 active sessions, 1.73% of total activity.
Action
Investigate the cause for high "IPC send completion sync" waits in
Module "TOAD 9.7.2.5".
Symptoms That Led to the Finding:
Wait class "Other" was consuming significant database time.
Impact is .15 active sessions, 38.29% of total activity.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additional Information
Miscellaneous Information
Wait class "Application" was not consuming significant database time.
Wait class "Commit" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
CPU was not a bottleneck for the instance.
Wait class "Network" was not consuming significant database time.
Wait class "User I/O" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.
The database's maintenance windows were active during 100% of the analysis
period.
Hello experts, please help with the ADDM findings above. It's really urgent.
Thanks,
Syed