RAC to Begin
Hi RAC experts,
After gaining some experience with Oracle 10g single-instance databases, I have started to learn about RAC 10g and 11g.
At work I only use Oracle single instances, so there is no way for me to learn RAC there; I cannot buy several computers and then interconnect them, etc.
I know I can use virtualization to install RAC on Windows, but I have only found one article, written by Tim, that sets this up on Linux, not on Windows.
Can someone help with a good link to a step-by-step guide to install and configure virtualization and then install RAC 10g or 11g on Windows XP 32-bit?
Cheers,
Edited by: Ora-Wiss on Apr 2, 2010 9:46 PM
Tim's articles are quite elaborate and good for this:
http://oracle-base.com/articles/rac/ArticlesRac.php
HTH
Aman....
Similar Messages
-
10g performance issue + RAC
Hi,
we are running Oracle E-Business Suite 11i with a 10g database and RAC.
Please let us know what can be tuned; below are statistics from an AWR report:
Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 2904 20-Jan-09 08:30:17 362 55.3
End Snap: 2905 20-Jan-09 09:30:43 525 79.0
Elapsed: 60.42 (mins)
DB Time: 214.04 (mins)
Report Summary
Cache Sizes
Begin End
Buffer Cache: 2,992M 2,896M Std Block Size: 8K
Shared Pool Size: 1,008M 1,104M Log Buffer: 14,352K
Load Profile
Per Second Per Transaction
Redo size: 95,171.90 7,079.15
Logical reads: 83,587.58 6,217.48
Block changes: 696.50 51.81
Physical reads: 163.22 12.14
Physical writes: 18.76 1.40
User calls: 429.26 31.93
Parses: 156.87 11.67
Hard parses: 5.60 0.42
Sorts: 146.05 10.86
Logons: 0.88 0.07
Executes: 991.77 73.77
Transactions: 13.44
% Blocks changed per Read: 0.83 Recursive Call %: 83.42
Rollback per transaction %: 39.30 Rows per Sort: 10.42
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 99.81 In-memory Sort %: 100.00
Library Hit %: 98.77 Soft Parse %: 96.43
Execute to Parse %: 84.18 Latch Hit %: 99.90
Parse CPU to Parse Elapsd %: 17.28 % Non-Parse CPU: 99.64
Shared Pool Statistics
Begin End
Memory Usage %: 75.51 84.38
% SQL with executions>1: 92.27 86.87
% Memory for SQL w/exec>1: 90.30 88.03
Top 5 Timed Events
Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
CPU time 5,703 44.4
db file sequential read 399,573 1,732 4 13.5 User I/O
gc current block 2-way 380,370 846 2 6.6 Cluster
gc cr grant 2-way 202,084 325 2 2.5 Cluster
gc cr multi block request 273,702 315 1 2.5 Cluster
RAC Statistics
Begin End
Number of Instances: 2 2
Global Cache Load Profile
Per Second Per Transaction
Global Cache blocks received: 207.45 15.43
Global Cache blocks served: 201.99 15.02
GCS/GES messages received: 491.52 36.56
GCS/GES messages sent: 592.97 44.11
DBWR Fusion writes: 3.18 0.24
Estd Interconnect traffic (KB) 3,487.35
Global Cache Efficiency Percentages (Target local+remote 100%)
Buffer access - local cache %: 99.56
Buffer access - remote cache %: 0.25
Buffer access - disk %: 0.19
Global Cache and Enqueue Services - Workload Characteristics
Avg global enqueue get time (ms): 0.6
Avg global cache cr block receive time (ms): 3.1
Avg global cache current block receive time (ms): 3.4
Avg global cache cr block build time (ms): 0.0
Avg global cache cr block send time (ms): 0.0
Global cache log flushes for cr blocks served %: 12.5
Avg global cache cr block flush time (ms): 3.4
Avg global cache current block pin time (ms): 0.0
Avg global cache current block send time (ms): 0.0
Global cache log flushes for current blocks served %: 0.0
Avg global cache current block flush time (ms): 1.0
Global Cache and Enqueue Services - Messaging Statistics
Avg message sent queue time (ms): 0.1
Avg message sent queue time on ksxp (ms): 1.5
Avg message received queue time (ms): 0.0
Avg GCS message process time (ms): 0.0
Avg GES message process time (ms): 0.0
% of direct sent messages: 46.02
% of indirect sent messages: 49.25
% of flow controlled messages: 4.73
Regards
Arora
Achyot,
Apart from the comments by Justin, a few other general comments:
- RAC is mainly suited for OLTP.
- If your OLTP application doesn't scale in a single-instance configuration, RAC will make this worse. You may want to tune your application without RAC first (set CLUSTER_DATABASE to FALSE).
- You need to find out whether the delays result from a badly configured interconnect. This will show up as events starting with 'gc', above all 'gc cache cr request'.
Hth
Sybrand Bakker
Senior Oracle DBA -
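To see how much time those 'gc' events are actually costing on each instance, a quick look at gv$system_event is a reasonable starting point (a minimal sketch; figures are cumulative since instance startup, so compare two samples or use AWR deltas):

```sql
-- Cumulative global cache ("gc") wait time per instance since startup
SELECT inst_id,
       event,
       total_waits,
       ROUND (time_waited_micro / 1e6, 1) AS seconds_waited
  FROM gv$system_event
 WHERE event LIKE 'gc%'
 ORDER BY time_waited_micro DESC;
```

If a handful of 'gc' events dominate here, check the interconnect (private NIC, MTU, switch) before touching the application.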
Hello all,
We have Oracle EBS R12.
There are many invoices already entered in the application.
There are more than 1,000 checks (receipts) already entered in the application, and also 1,000 credit memos.
As there is not enough time to apply them one by one, is there anything (a request, an API, ...) that can help apply those receipts to invoices, and those credit memos to invoices?
For applying receipts to invoices I tried the procedure below, but there is a problem when applying one receipt to more than one invoice. Example:
receipt = 7000, invoice1 = 5000, invoice2 = 3000. It applies 5000 to invoice1, but the remaining 2000 is not applied to invoice2.
CREATE OR REPLACE PROCEDURE xx_ar_receipt_apply_b (
   cr_rec_id    IN NUMBER,
   rec_amount   IN NUMBER,
   rec_no       IN VARCHAR2,
   cus_id       IN NUMBER)
AS
   l_return_status   VARCHAR2 (1);
   l_msg_count       NUMBER;
   l_msg_data        VARCHAR2 (240);
   new_amt           NUMBER;

   --- Open invoices for the customer, oldest first ---------
   CURSOR c
   IS
      SELECT ct.trx_number inv_no, rac.amount_due_remaining inv_amount,
             rac.payment_schedule_id payment_id, ct.trx_date inv_date,
             rac.terms_sequence_number l_installment
        FROM ar_payment_schedules_all rac, ra_customer_trx_all ct
       WHERE rac.customer_trx_id = ct.customer_trx_id
         AND ct.bill_to_customer_id = cus_id
         AND rac.class = 'INV'
         AND rac.amount_due_remaining > 0
       ORDER BY ct.trx_date, rac.payment_schedule_id;
BEGIN
   DBMS_APPLICATION_INFO.set_client_info (102);
   fnd_global.apps_initialize (1374, 20678, 222);
   mo_global.init ('AR');
   mo_global.set_policy_context ('S', 102);

   -- Use the receipt amount passed in (the original hard-coded a test value
   -- here, so only part of the receipt was ever applied)
   new_amt := rec_amount;

   FOR rec_inv IN c
   LOOP
      IF new_amt > rec_inv.inv_amount
      THEN
         -- Receipt still covers this whole invoice: apply the full invoice amount
         ar_receipt_api_pub.apply
            (p_api_version                 => 1.0,
             p_init_msg_list               => fnd_api.g_true,
             p_commit                      => fnd_api.g_true,
             p_validation_level            => fnd_api.g_valid_level_full,
             x_return_status               => l_return_status,
             x_msg_count                   => l_msg_count,
             x_msg_data                    => l_msg_data,
             p_receipt_number              => rec_no,
             p_cash_receipt_id             => cr_rec_id,
             p_trx_number                  => rec_inv.inv_no,
             p_amount_applied              => rec_inv.inv_amount,
             p_installment                 => rec_inv.l_installment,
             p_applied_payment_schedule_id => rec_inv.payment_id);
         new_amt := new_amt - rec_inv.inv_amount;
      ELSE
         -- Receipt is exhausted by this invoice: apply only what is left.
         -- This must be ELSE, not a second IF: the original ran both branches
         -- in the same iteration once new_amt had been reduced, so the same
         -- invoice was applied twice and the next invoice never reached.
         ar_receipt_api_pub.apply
            (p_api_version                 => 1.0,
             p_init_msg_list               => fnd_api.g_true,
             p_commit                      => fnd_api.g_true,
             p_validation_level            => fnd_api.g_valid_level_full,
             x_return_status               => l_return_status,
             x_msg_count                   => l_msg_count,
             x_msg_data                    => l_msg_data,
             p_receipt_number              => rec_no,
             p_cash_receipt_id             => cr_rec_id,
             p_trx_number                  => rec_inv.inv_no,
             p_amount_applied              => new_amt,
             p_installment                 => rec_inv.l_installment,
             p_applied_payment_schedule_id => rec_inv.payment_id);
         new_amt := 0;
      END IF;

      EXIT WHEN new_amt = 0;
   END LOOP;

   arp_standard.disable_debug;
END;
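For reference, a call to the procedure above could look like this (the IDs and receipt number are hypothetical; the amount is the 7000 from the example in the question):

```sql
-- Apply one 7000 receipt across the customer's open invoices, oldest first
BEGIN
   xx_ar_receipt_apply_b (cr_rec_id  => 12345,        -- hypothetical cash_receipt_id
                          rec_amount => 7000,
                          rec_no     => 'RCPT-1001',  -- hypothetical receipt number
                          cus_id     => 6789);        -- hypothetical bill-to customer_id
END;
/
```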
Thanks in advance.
Edited by: 858923 on May 14, 2011 12:56 PM
How do you create your receipts? If you are using Lockbox, you can specify invoice number(s) so that the receipt will be applied to the invoices after creation.
-
How to set the correct shared pool size and db_cache_size using AWR
Hi All,
I want to know how to set the correct sizes for shared_pool_size and db_cache_size using the Shared Pool Advisory and Buffer Pool Advisory sections of an AWR report. I have pasted both advisories from the report below.
Shared Pool Advisory
* SP: Shared Pool Est LC: Estimated Library Cache Factr: Factor
* Note there is often a 1:Many correlation between a single logical object in the Library Cache, and the physical number of memory objects associated with it. Therefore comparing the number of Lib Cache objects (e.g. in v$librarycache), with the number of Lib Cache Memory Objects is invalid.
Shared Pool Size(M) SP Size Factr Est LC Size (M) Est LC Mem Obj Est LC Time Saved (s) Est LC Time Saved Factr Est LC Load Time (s) Est LC Load Time Factr Est LC Mem Obj Hits (K)
4,096 1.00 471 25,153 184,206 1.00 149 1.00 9,069
4,736 1.16 511 27,328 184,206 1.00 149 1.00 9,766
5,248 1.28 511 27,346 184,206 1.00 149 1.00 9,766
5,760 1.41 511 27,346 184,206 1.00 149 1.00 9,766
6,272 1.53 511 27,346 184,206 1.00 149 1.00 9,766
6,784 1.66 511 27,346 184,206 1.00 149 1.00 9,766
7,296 1.78 511 27,346 184,206 1.00 149 1.00 9,766
7,808 1.91 511 27,346 184,206 1.00 149 1.00 9,766
8,320 2.03 511 27,346 184,206 1.00 149 1.00 9,766
Buffer Pool Advisory
* Only rows with estimated physical reads >0 are displayed
* ordered by Block Size, Buffers For Estimate
P Size for Est (M) Size Factor Buffers (thousands) Est Phys Read Factor Estimated Phys Reads (thousands) Est Phys Read Time Est %DBtime for Rds
D 4,096 0.10 485 1.02 1,002 1 0.00
D 8,192 0.20 970 1.00 987 1 0.00
D 12,288 0.30 1,454 1.00 987 1 0.00
D 16,384 0.40 1,939 1.00 987 1 0.00
D 20,480 0.50 2,424 1.00 987 1 0.00
D 24,576 0.60 2,909 1.00 987 1 0.00
D 28,672 0.70 3,394 1.00 987 1 0.00
D 32,768 0.80 3,878 1.00 987 1 0.00
D 36,864 0.90 4,363 1.00 987 1 0.00
D 40,960 1.00 4,848 1.00 987 1 0.00
D 45,056 1.10 5,333 1.00 987 1 0.00
D 49,152 1.20 5,818 1.00 987 1 0.00
D 53,248 1.30 6,302 1.00 987 1 0.00
D 57,344 1.40 6,787 1.00 987 1 0.00
D 61,440 1.50 7,272 1.00 987 1 0.00
D 65,536 1.60 7,757 1.00 987 1 0.00
D 69,632 1.70 8,242 1.00 987 1 0.00
D 73,728 1.80 8,726 1.00 987 1 0.00
D 77,824 1.90 9,211 1.00 987 1 0.00
D 81,920 2.00 9,696 1.00 987 1 0.00
My shared pool size is 4 GB and db_cache_size is 40 GB.
Please help me configure the correct sizes for these.
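The same advisory data pasted above can also be pulled live from the instance (a minimal sketch; both views are populated when the advisories are enabled, e.g. statistics_level = TYPICAL):

```sql
-- Buffer cache advisory: estimated physical reads at candidate cache sizes
SELECT size_for_estimate AS cache_mb,
       size_factor,
       estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND block_size = 8192
 ORDER BY size_for_estimate;

-- Shared pool advisory: estimated library cache time saved at candidate sizes
SELECT shared_pool_size_for_estimate AS pool_mb,
       shared_pool_size_factor,
       estd_lc_time_saved
  FROM v$shared_pool_advice
 ORDER BY shared_pool_size_for_estimate;
```

Look for the smallest size at which the estimated reads (or time saved) stops improving; in your report both curves flatten almost immediately, which suggests growing either cache further buys nothing.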
Thanks and Regards,
Hi,
Actually the batch load is taking too much time.
Please find below the 1-hour AWR report.
Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 6557 27-Nov-11 16:00:06 126 1.3
End Snap: 6558 27-Nov-11 17:00:17 130 1.6
Elapsed: 60.17 (mins)
DB Time: 34.00 (mins)
Report Summary
Cache Sizes
Begin End
Buffer Cache: 40,960M 40,960M Std Block Size: 8K
Shared Pool Size: 4,096M 4,096M Log Buffer: 25,908K
Load Profile
Per Second Per Transaction Per Exec Per Call
DB Time(s): 0.6 1.4 0.00 0.07
DB CPU(s): 0.5 1.2 0.00 0.06
Redo size: 281,296.9 698,483.4
Logical reads: 20,545.6 51,016.4
Block changes: 1,879.5 4,667.0
Physical reads: 123.7 307.2
Physical writes: 66.4 164.8
User calls: 8.2 20.4
Parses: 309.4 768.4
Hard parses: 8.5 21.2
W/A MB processed: 1.7 4.3
Logons: 0.7 1.6
Executes: 1,235.9 3,068.7
Rollbacks: 0.0 0.0
Transactions: 0.4
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.66 In-memory Sort %: 100.00
Library Hit %: 99.19 Soft Parse %: 97.25
Execute to Parse %: 74.96 Latch Hit %: 99.97
Parse CPU to Parse Elapsd %: 92.41 % Non-Parse CPU: 98.65
Shared Pool Statistics
Begin End
Memory Usage %: 80.33 82.01
% SQL with executions>1: 90.90 86.48
% Memory for SQL w/exec>1: 90.10 86.89
Top 5 Timed Foreground Events
Event Waits Time(s) Avg wait (ms) % DB time Wait Class
DB CPU 1,789 87.72
db file sequential read 27,531 50 2 2.45 User I/O
db file scattered read 26,322 30 1 1.47 User I/O
row cache lock 1,798 20 11 0.96 Concurrency
OJVM: Generic 36 15 421 0.74 Other
Host CPU (CPUs: 24 Cores: 12 Sockets: )
Load Average Begin Load Average End %User %System %WIO %Idle
0.58 1.50 2.8 0.7 0.1 96.6
Instance CPU
%Total CPU %Busy CPU %DB time waiting for CPU (Resource Manager)
2.2 63.6 0.0
Memory Statistics
Begin End
Host Mem (MB): 131,072.0 131,072.0
SGA use (MB): 50,971.4 50,971.4
PGA use (MB): 545.5 1,066.3
% Host Mem used for SGA+PGA: 39.30 39.70
RAC Statistics
Begin End
Number of Instances: 2 2
Global Cache Load Profile
Per Second Per Transaction
Global Cache blocks received: 3.09 7.68
Global Cache blocks served: 1.86 4.62
GCS/GES messages received: 78.64 195.27
GCS/GES messages sent: 53.82 133.65
DBWR Fusion writes: 0.52 1.30
Estd Interconnect traffic (KB) 65.50
Global Cache Efficiency Percentages (Target local+remote 100%)
Buffer access - local cache %: 99.65
Buffer access - remote cache %: 0.02
Buffer access - disk %: 0.34
Global Cache and Enqueue Services - Workload Characteristics
Avg global enqueue get time (ms): 0.0
Avg global cache cr block receive time (ms): 1.7
Avg global cache current block receive time (ms): 1.0
Avg global cache cr block build time (ms): 0.0
Avg global cache cr block send time (ms): 0.0
Global cache log flushes for cr blocks served %: 1.4
Avg global cache cr block flush time (ms): 0.9
Avg global cache current block pin time (ms): 0.0
Avg global cache current block send time (ms): 0.0
Global cache log flushes for current blocks served %: 0.1
Avg global cache current block flush time (ms): 0.0
Global Cache and Enqueue Services - Messaging Statistics
Avg message sent queue time (ms): 0.0
Avg message sent queue time on ksxp (ms): 0.4
Avg message received queue time (ms): 0.5
Avg GCS message process time (ms): 0.0
Avg GES message process time (ms): 0.0
% of direct sent messages: 79.13
% of indirect sent messages: 17.10
% of flow controlled messages: 3.77
Cluster Interconnect
Begin End
Interface IP Address Pub Source IP Pub Src
en9 10.51.10.61 N Oracle Cluster Repository
Main Report
* Report Summary
* Wait Events Statistics
* SQL Statistics
* Instance Activity Statistics
* IO Stats
* Buffer Pool Statistics
* Advisory Statistics
* Wait Statistics
* Undo Statistics
* Latch Statistics
* Segment Statistics
* Dictionary Cache Statistics
* Library Cache Statistics
* Memory Statistics
* Streams Statistics
* Resource Limit Statistics
* Shared Server Statistics
* init.ora Parameters
More RAC Statistics
* RAC Report Summary
* Global Messaging Statistics
* Global CR Served Stats
* Global CURRENT Served Stats
* Global Cache Transfer Stats
* Interconnect Stats
* Dynamic Remastering Statistics
Back to Top
Statistic Name Time (s) % of DB Time
sql execute elapsed time 1,925.20 94.38
DB CPU 1,789.38 87.72
connection management call elapsed time 99.65 4.89
PL/SQL execution elapsed time 89.81 4.40
parse time elapsed 46.32 2.27
hard parse elapsed time 25.01 1.23
Java execution elapsed time 21.24 1.04
PL/SQL compilation elapsed time 11.92 0.58
failed parse elapsed time 9.37 0.46
hard parse (sharing criteria) elapsed time 8.71 0.43
sequence load elapsed time 0.06 0.00
repeated bind elapsed time 0.02 0.00
hard parse (bind mismatch) elapsed time 0.01 0.00
DB time 2,039.77
background elapsed time 122.00
background cpu time 113.42
Statistic Value End Value
NUM_LCPUS 0
NUM_VCPUS 0
AVG_BUSY_TIME 12,339
AVG_IDLE_TIME 348,838
AVG_IOWAIT_TIME 221
AVG_SYS_TIME 2,274
AVG_USER_TIME 9,944
BUSY_TIME 299,090
IDLE_TIME 8,375,051
IOWAIT_TIME 6,820
SYS_TIME 57,512
USER_TIME 241,578
LOAD 1 2
OS_CPU_WAIT_TIME 312,200
PHYSICAL_MEMORY_BYTES 137,438,953,472
NUM_CPUS 24
NUM_CPU_CORES 12
GLOBAL_RECEIVE_SIZE_MAX 1,310,720
GLOBAL_SEND_SIZE_MAX 1,310,720
TCP_RECEIVE_SIZE_DEFAULT 16,384
TCP_RECEIVE_SIZE_MAX 9,223,372,036,854,775,807
TCP_RECEIVE_SIZE_MIN 4,096
TCP_SEND_SIZE_DEFAULT 16,384
TCP_SEND_SIZE_MAX 9,223,372,036,854,775,807
TCP_SEND_SIZE_MIN 4,096
Back to Wait Events Statistics
Back to Top
Operating System Statistics - Detail
Snap Time Load %busy %user %sys %idle %iowait
27-Nov 16:00:06 0.58
27-Nov 17:00:17 1.50 3.45 2.79 0.66 96.55 0.08
Back to Wait Events Statistics
Back to Top
Foreground Wait Class
* s - second, ms - millisecond - 1000th of a second
* ordered by wait time desc, waits desc
* %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
* Captured Time accounts for 95.7% of Total DB time 2,039.77 (s)
* Total FG Wait Time: 163.14 (s) DB CPU time: 1,789.38 (s)
Wait Class Waits %Time -outs Total Wait Time (s) Avg wait (ms) %DB time
DB CPU 1,789 87.72
User I/O 61,229 0 92 1 4.49
Other 102,743 40 31 0 1.50
Concurrency 3,169 10 24 7 1.16
Cluster 58,920 0 11 0 0.52
System I/O 45,407 0 6 0 0.29
Configuration 107 7 1 5 0.03
Commit 383 0 0 1 0.01
Network 15,275 0 0 0 0.00
Application 52 8 0 0 0.00
Back to Wait Events Statistics
Back to Top
Foreground Wait Events
* s - second, ms - millisecond - 1000th of a second
* Only events with Total Wait Time (s) >= .001 are shown
* ordered by wait time desc, waits desc (idle events last)
* %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn % DB time
db file sequential read 27,531 0 50 2 18.93 2.45
db file scattered read 26,322 0 30 1 18.10 1.47
row cache lock 1,798 0 20 11 1.24 0.96
OJVM: Generic 36 42 15 421 0.02 0.74
db file parallel read 394 0 7 19 0.27 0.36
control file sequential read 22,248 0 6 0 15.30 0.28
reliable message 4,439 0 4 1 3.05 0.18
gc current grant busy 7,597 0 3 0 5.22 0.16
PX Deq: Slave Session Stats 2,661 0 3 1 1.83 0.16
DFS lock handle 3,208 0 3 1 2.21 0.16
direct path write temp 4,842 0 3 1 3.33 0.15
library cache load lock 39 0 3 72 0.03 0.14
gc cr multi block request 37,008 0 3 0 25.45 0.14
IPC send completion sync 5,451 0 2 0 3.75 0.10
gc cr block 2-way 4,669 0 2 0 3.21 0.09
enq: PS - contention 3,183 33 1 0 2.19 0.06
gc cr grant 2-way 5,151 0 1 0 3.54 0.06
direct path read temp 1,722 0 1 1 1.18 0.05
gc current block 2-way 1,807 0 1 0 1.24 0.03
os thread startup 6 0 1 108 0.00 0.03
name-service call wait 12 0 1 47 0.01 0.03
PX Deq: Signal ACK RSG 2,046 50 0 0 1.41 0.02
log file switch completion 3 0 0 149 0.00 0.02
rdbms ipc reply 3,610 0 0 0 2.48 0.02
gc current grant 2-way 1,432 0 0 0 0.98 0.02
library cache pin 903 32 0 0 0.62 0.02
PX Deq: reap credit 35,815 100 0 0 24.63 0.01
log file sync 383 0 0 1 0.26 0.01
Disk file operations I/O 405 0 0 0 0.28 0.01
library cache lock 418 3 0 0 0.29 0.01
kfk: async disk IO 23,159 0 0 0 15.93 0.01
gc current block busy 4 0 0 35 0.00 0.01
gc current multi block request 1,206 0 0 0 0.83 0.01
ges message buffer allocation 38,526 0 0 0 26.50 0.00
enq: FB - contention 131 0 0 0 0.09 0.00
undo segment extension 8 100 0 6 0.01 0.00
CSS initialization 8 0 0 6 0.01 0.00
SQL*Net message to client 14,600 0 0 0 10.04 0.00
enq: HW - contention 96 0 0 0 0.07 0.00
CSS operation: action 8 0 0 4 0.01 0.00
gc cr block busy 33 0 0 1 0.02 0.00
latch free 30 0 0 1 0.02 0.00
enq: TM - contention 49 6 0 0 0.03 0.00
enq: JQ - contention 19 100 0 1 0.01 0.00
SQL*Net more data to client 666 0 0 0 0.46 0.00
asynch descriptor resize 3,179 100 0 0 2.19 0.00
latch: shared pool 3 0 0 3 0.00 0.00
CSS operation: query 24 0 0 0 0.02 0.00
PX Deq: Signal ACK EXT 72 0 0 0 0.05 0.00
KJC: Wait for msg sends to complete 269 0 0 0 0.19 0.00
latch: object queue header operation 4 0 0 1 0.00 0.00
gc cr block congested 5 0 0 0 0.00 0.00
utl_file I/O 11 0 0 0 0.01 0.00
enq: TO - contention 3 33 0 0 0.00 0.00
SQL*Net message from client 14,600 0 219,478 15033 10.04
jobq slave wait 7,726 100 3,856 499 5.31
PX Deq: Execution Msg 10,556 19 50 5 7.26
PX Deq: Execute Reply 2,946 31 27 9 2.03
PX Deq: Parse Reply 3,157 35 3 1 2.17
PX Deq: Join ACK 2,976 28 2 1 2.05
PX Deq Credit: send blkd 7 14 0 4 0.00
Back to Wait Events Statistics
Back to Top
Background Wait Events
* ordered by wait time desc, waits desc (idle events last)
* Only events with Total Wait Time (s) >= .001 are shown
* %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn % bg time
os thread startup 140 0 13 90 0.10 10.35
db file parallel write 8,233 0 6 1 5.66 5.08
log file parallel write 3,906 0 6 1 2.69 4.62
log file sequential read 350 0 5 16 0.24 4.49
control file sequential read 13,737 0 5 0 9.45 3.72
DFS lock handle 2,990 27 2 1 2.06 1.43
db file sequential read 921 0 2 2 0.63 1.39
SQL*Net break/reset to client 18 0 1 81 0.01 1.19
control file parallel write 2,455 0 1 1 1.69 1.12
ges lms sync during dynamic remastering and reconfig 24 100 1 50 0.02 0.98
library cache load lock 35 0 1 24 0.02 0.68
ASM file metadata operation 3,483 0 1 0 2.40 0.65
enq: CO - master slave det 1,203 100 1 0 0.83 0.46
kjbdrmcvtq lmon drm quiesce: ping completion 9 0 1 62 0.01 0.46
enq: WF - contention 11 0 0 35 0.01 0.31
CGS wait for IPC msg 32,702 100 0 0 22.49 0.19
gc object scan 28,788 100 0 0 19.80 0.15
row cache lock 535 0 0 0 0.37 0.14
library cache pin 370 55 0 0 0.25 0.12
ksxr poll remote instances 19,119 100 0 0 13.15 0.11
name-service call wait 6 0 0 19 0.00 0.10
gc current block 2-way 304 0 0 0 0.21 0.09
gc cr block 2-way 267 0 0 0 0.18 0.08
gc cr grant 2-way 355 0 0 0 0.24 0.08
ges LMON to get to FTDONE 3 100 0 24 0.00 0.06
enq: CF - contention 145 76 0 0 0.10 0.05
PX Deq: reap credit 8,842 100 0 0 6.08 0.05
reliable message 126 0 0 0 0.09 0.05
db file scattered read 19 0 0 3 0.01 0.05
library cache lock 162 1 0 0 0.11 0.04
latch: shared pool 2 0 0 27 0.00 0.04
Disk file operations I/O 504 0 0 0 0.35 0.04
gc current grant busy 148 0 0 0 0.10 0.04
gcs log flush sync 84 0 0 1 0.06 0.04
ges message buffer allocation 24,934 0 0 0 17.15 0.02
enq: CR - block range reuse ckpt 83 0 0 0 0.06 0.02
latch free 22 0 0 1 0.02 0.02
CSS operation: action 13 0 0 2 0.01 0.02
CSS initialization 4 0 0 6 0.00 0.02
direct path read 1 0 0 21 0.00 0.02
rdbms ipc reply 153 0 0 0 0.11 0.01
db file parallel read 2 0 0 8 0.00 0.01
direct path write 5 0 0 3 0.00 0.01
gc current multi block request 49 0 0 0 0.03 0.01
gc current block busy 5 0 0 2 0.00 0.01
enq: PS - contention 24 50 0 0 0.02 0.01
gc cr multi block request 54 0 0 0 0.04 0.01
ges generic event 1 100 0 10 0.00 0.01
gc current grant 2-way 35 0 0 0 0.02 0.01
kfk: async disk IO 183 0 0 0 0.13 0.01
Log archive I/O 3 0 0 2 0.00 0.01
gc buffer busy acquire 2 0 0 3 0.00 0.00
LGWR wait for redo copy 123 0 0 0 0.08 0.00
IPC send completion sync 18 0 0 0 0.01 0.00
enq: TA - contention 11 0 0 0 0.01 0.00
read by other session 2 0 0 2 0.00 0.00
enq: TM - contention 9 89 0 0 0.01 0.00
latch: ges resource hash list 135 0 0 0 0.09 0.00
PX Deq: Slave Session Stats 12 0 0 0 0.01 0.00
KJC: Wait for msg sends to complete 89 0 0 0 0.06 0.00
enq: TD - KTF dump entries 8 0 0 0 0.01 0.00
enq: US - contention 7 0 0 0 0.00 0.00
CSS operation: query 12 0 0 0 0.01 0.00
enq: TK - Auto Task Serialization 6 100 0 0 0.00 0.00
PX Deq: Signal ACK RSG 24 50 0 0 0.02 0.00
log file single write 6 0 0 0 0.00 0.00
enq: WL - contention 2 100 0 1 0.00 0.00
ADR block file read 13 0 0 0 0.01 0.00
ADR block file write 5 0 0 0 0.00 0.00
latch: object queue header operation 1 0 0 1 0.00 0.00
gc cr block busy 1 0 0 1 0.00 0.00
rdbms ipc message 103,276 67 126,259 1223 71.03
PX Idle Wait 6,467 67 12,719 1967 4.45
wait for unread message on broadcast channel 7,240 100 7,221 997 4.98
gcs remote message 218,809 84 7,213 33 150.49
DIAG idle wait 203,228 95 7,185 35 139.77
shared server idle wait 121 100 3,630 30000 0.08
ASM background timer 3,343 0 3,611 1080 2.30
Space Manager: slave idle wait 723 100 3,610 4993 0.50
heartbeat monitor sleep 722 100 3,610 5000 0.50
ges remote message 73,089 52 3,609 49 50.27
dispatcher timer 66 88 3,608 54660 0.05
pmon timer 1,474 82 3,607 2447 1.01
PING 1,487 19 3,607 2426 1.02
Streams AQ: qmn slave idle wait 125 0 3,594 28754 0.09
Streams AQ: qmn coordinator idle wait 250 50 3,594 14377 0.17
smon timer 18 50 3,505 194740 0.01
JOX Jit Process Sleep 73 100 976 13370 0.05
class slave wait 56 0 605 10806 0.04
KSV master wait 2,215 98 1 0 1.52
SQL*Net message from client 109 0 0 2 0.07
PX Deq: Parse Reply 27 44 0 1 0.02
PX Deq: Join ACK 30 40 0 1 0.02
PX Deq: Execute Reply 20 30 0 0 0.01
Streams AQ: RAC qmn coordinator idle wait 259 100 0 0 0.18
Back to Wait Events Statistics
Back to Top
Wait Event Histogram
* Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
* % of Waits: value of .0 indicates value was <.05%; value of null is truly 0
* % of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
* Ordered by Event (idle events last)
% of Waits
Event Total Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
ADR block file read 13 100.0
ADR block file write 5 100.0
ADR file lock 6 100.0
ARCH wait for archivelog lock 3 100.0
ASM file metadata operation 3483 99.6 .1 .1 .2
CGS wait for IPC msg 32.7K 100.0
CSS initialization 12 50.0 50.0
CSS operation: action 21 28.6 9.5 61.9
CSS operation: query 36 86.1 5.6 8.3
DFS lock handle 6198 98.6 1.2 .1 .1
Disk file operations I/O 909 95.7 3.6 .7
IPC send completion sync 5469 99.9 .1 .0 .0
KJC: Wait for msg sends to complete 313 100.0
LGWR wait for redo copy 122 100.0
Log archive I/O 3 66.7 33.3
OJVM: Generic 36 55.6 44.4
PX Deq: Signal ACK EXT 72 98.6 1.4
PX Deq: Signal ACK RSG 2070 99.7 .0 .1 .0 .1
PX Deq: Slave Session Stats 2673 99.7 .2 .1 .0
PX Deq: reap credit 44.7K 100.0
SQL*Net break/reset to client 20 95.0 5.0
SQL*Net message to client 14.7K 100.0
SQL*Net more data from client 32 100.0
SQL*Net more data to client 689 100.0
asynch descriptor resize 3387 100.0
buffer busy waits 2 100.0
control file parallel write 2455 96.6 2.2 .6 .6 .1
control file sequential read 36K 99.4 .3 .1 .1 .1 .1 .0
db file parallel read 397 8.8 .8 5.5 12.6 17.4 46.3 8.6
db file parallel write 8233 85.4 10.3 2.3 1.4 .4 .1
db file scattered read 26.3K 79.2 1.5 8.2 10.5 .6 .1 .0
db file sequential read 28.4K 60.2 3.3 18.0 18.1 .3 .1 .0
db file single write 2 100.0
direct path read 2 50.0 50.0
direct path read temp 1722 95.8 2.8 .1 .5 .8 .1
direct path write 6 83.3 16.7
direct path write temp 4842 96.3 2.7 .5 .2 .0 .0 .2
enq: AF - task serialization 1 100.0
enq: CF - contention 145 99.3 .7
enq: CO - master slave det 1203 98.9 .8 .2
enq: CR - block range reuse ckpt 83 100.0
enq: DR - contention 2 100.0
enq: FB - contention 131 100.0
enq: HW - contention 97 100.0
enq: JQ - contention 19 89.5 10.5
enq: JS - job run lock - synchronize 3 100.0
enq: MD - contention 1 100.0
enq: MW - contention 2 100.0
enq: PS - contention 3207 99.5 .4 .1
enq: TA - contention 11 100.0
enq: TD - KTF dump entries 8 100.0
enq: TK - Auto Task Serialization 6 100.0
enq: TM - contention 58 100.0
enq: TO - contention 3 100.0
enq: TQ - DDL contention 1 100.0
enq: TS - contention 1 100.0
enq: UL - contention 1 100.0
enq: US - contention 7 100.0
enq: WF - contention 11 81.8 18.2
enq: WL - contention 2 50.0 50.0
gc buffer busy acquire 2 50.0 50.0
gc cr block 2-way 4934 99.9 .1 .0 .0
gc cr block busy 35 68.6 31.4
gc cr block congested 6 100.0
gc cr disk read 2 100.0
gc cr grant 2-way 4824 100.0 .0
gc cr grant congested 2 100.0
gc cr multi block request 37.1K 99.8 .2 .0 .0 .0 .0 .0
gc current block 2-way 2134 99.9 .0 .0
gc current block busy 7 14.3 14.3 14.3 28.6 28.6
gc current block congested 2 100.0
gc current grant 2-way 1337 99.9 .1
gc current grant busy 7123 99.2 .2 .2 .0 .0 .3 .1
gc current grant congested 2 100.0
gc current multi block request 1260 99.8 .2
gc object scan 28.8K 100.0
gcs log flush sync 65 95.4 3.1 1.5
ges LMON to get to FTDONE 3 100.0
ges generic event 1 100.0
ges inquiry response 2 100.0
ges lms sync during dynamic remastering and reconfig 24 16.7 29.2 54.2
ges message buffer allocation 63.1K 100.0
kfk: async disk IO 23.3K 100.0 .0 .0
kjbdrmcvtq lmon drm quiesce: ping completion 9 11.1 88.9
ksxr poll remote instances 19.1K 100.0
latch free 52 59.6 40.4
latch: call allocation 2 100.0
latch: gc element 1 100.0
latch: gcs resource hash 1 100.0
latch: ges resource hash list 135 100.0
latch: object queue header operation 5 40.0 40.0 20.0
latch: shared pool 5 40.0 20.0 20.0 20.0
library cache load lock 74 9.5 5.4 8.1 17.6 10.8 13.5 35.1
library cache lock 493 99.2 .4 .4
library cache pin 1186 98.4 .3 1.2 .1
library cache: mutex X 6 100.0
log file parallel write 3897 72.9 1.5 17.1 7.5 .6 .3 .1
log file sequential read 350 4.6 3.1 59.4 30.0 2.9
log file single write 6 100.0
log file switch completion 3 33.3 66.7
log file sync 385 90.4 3.6 4.7 .8 .5
name-service call wait 18 5.6 5.6 5.6 16.7 44.4 22.2
os thread startup 146 100.0
rdbms ipc reply 3763 99.7 .3
read by other session 2 50.0 50.0
reliable message 4565 99.7 .2 .0 .0 .1
row cache lock 2334 99.3 .2 .1 .1 .3
undo segment extension 8 50.0 37.5 12.5
utl_file I/O 11 100.0
ASM background timer 3343 57.0 .3 .1 .1 .1 21.1 21.4
DIAG idle wait 203.2K 3.4 .2 .4 18.0 41.4 14.8 21.8
JOX Jit Process Sleep 73 2.7 97.3
KSV master wait 2213 99.4 .1 .2 .3
PING 1487 81.0 19.0
PX Deq Credit: send blkd 7 57.1 14.3 14.3 14.3
PX Deq: Execute Reply 2966 59.8 .8 9.5 5.6 10.2 2.6 11.4
PX Deq: Execution Msg 10.6K 72.4 12.1 2.6 2.5 .1 5.6 4.6 .0
PX Deq: Join ACK 3006 77.9 22.1 .1
PX Deq: Parse Reply 3184 67.1 31.1 1.6 .2
PX Idle Wait 6466 .2 8.7 4.3 4.8 .3 .1 5.0 76.6
SQL*Net message from client 14.7K 72.4 2.8 .8 .5 .9 .4 2.8 19.3
Space Manager: slave idle wait 722 100.0
Streams AQ: RAC qmn coordinator idle wait 259 100.0
Streams AQ: qmn coordinator idle wait 250 50.0 50.0
Streams AQ: qmn slave idle wait 125 100.0
class slave wait 55 67.3 7.3 1.8 5.5 1.8 7.3 9.1
dispatcher timer 66 6.1 93.9
gcs remote message 218.6K 7.7 1.8 1.2 1.6 1.7 15.7 70.3
ges remote message 72.9K 29.7 5.1 2.7 2.2 1.5 4.0 54.7
heartbeat monitor sleep 722 100.0
jobq slave wait 7725 .1 .0 99.9
pmon timer 1474 18.4 81.6
rdbms ipc message 103.3K 20.7 2.7 1.5 1.3 .9 .7 40.7 31.6
shared server idle wait 121 100.0
smon timer 18 100.0
wait for unread message on broadcast channel 7238 .3 99.7
Back to Wait Events Statistics
Back to Top
Wait Event Histogram Detail (64 msec to 2 sec)
* Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
* Units for % of Total Waits: ms is milliseconds s is 1024 milliseconds (approximately 1 second)
* % of Total Waits: total waits for all wait classes, including Idle
* % of Total Waits: value of .0 indicates value was <.05%; value of null is truly 0
* Ordered by Event (only non-idle events are displayed)
% of Total Waits
Event Waits 64ms to 2s <32ms <64ms <1/8s <1/4s <1/2s <1s <2s >=2s
ASM file metadata operation 6 99.8 .1 .1
DFS lock handle 6 99.9 .1 .0
OJVM: Generic 16 55.6 2.8 41.7
PX Deq: Signal ACK RSG 3 99.9 .0 .1
PX Deq: Slave Session Stats 3 99.9 .0 .0 .0
SQL*Net break/reset to client 1 95.0 5.0
control file sequential read 1 100.0 .0
db file parallel read 34 91.4 8.6
db file scattered read 4 100.0 .0 .0
db file sequential read 6 100.0 .0 .0 .0
direct path write temp 11 99.8 .1 .1 .0
enq: WF - contention 2 81.8 18.2
gc cr block 2-way 1 100.0 .0
gc cr multi block request 1 100.0 .0
gc current block 2-way 1 100.0 .0
gc current block busy 2 71.4 28.6
gc current grant busy 8 99.9 .0 .1
ges lms sync during dynamic remastering and reconfig 13 45.8 20.8 33.3
kjbdrmcvtq lmon drm quiesce: ping completion 8 11.1 11.1 77.8
latch: shared pool 1 80.0 20.0
library cache load lock 26 64.9 14.9 12.2 4.1 4.1
log file parallel write 2 99.9 .0 .0
log file sequential read 10 97.1 2.0 .6 .3
log file switch completion 2 33.3 66.7
name-service call wait 4 77.8 22.2
os thread startup 146 100.0
reliable message 4 99.9 .0 .1
row cache lock 2 99.7 .0 .0 .3
Back to Wait Events Statistics
Back to Top
Wait Event Histogram Detail (4 sec to 2 min)
* Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
* Units for % of Total Waits: s is 1024 milliseconds (approximately 1 second) m is 64*1024 milliseconds (approximately 67 seconds or 1.1 minutes)
* % of Total Waits: total waits for all wait classes, including Idle
* % of Total Waits: value of .0 indicates value was <.05%; value of null is truly 0
* Ordered by Event (only non-idle events are displayed)
% of Total Waits
Event Waits 4s to 2m <2s <4s <8s <16s <32s < 1m < 2m >=2m
row cache lock 6 99.7 .3
Back to Wait Events Statistics
Back to Top
Wait Event Histogram Detail (4 min to 1 hr)
No data exists for this section of the report.
Back to Wait Events Statistics
Back to Top
Service Statistics
* ordered by DB Time
Service Name DB Time (s) DB CPU (s) Physical Reads (K) Logical Reads (K)
ubshost 1,934 1,744 445 73,633
SYS$USERS 105 45 1 404
SYS$BACKGROUND 0 0 1 128
ubshostXDB 0 0 0 0
Back to Wait Events Statistics
Back to Top
Service Wait Class Stats
* Wait Class info for services in the Service Statistics section.
* Total Waits and Time Waited displayed for the following wait classes: User I/O, Concurrency, Administrative, Network
* Time Waited (Wt Time) in seconds
Service Name User I/O Total Wts User I/O Wt Time Concurcy Total Wts Concurcy Wt Time Admin Total Wts Admin Wt Time Network Total Wts Network Wt Time
ubshost 60232 90 2644 4 0 0 13302 0
SYS$USERS 997 2 525 19 0 0 1973 0
SYS$BACKGROUND 1456 2 1258 14 0 0 0 0
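One quick derived metric from the Service Wait Class Stats table: dividing time waited by the wait count gives the average wait per event. A sketch for the ubshost service's User I/O class, using the rounded figures from the table (Time Waited is reported only to whole seconds, so this is approximate):

```python
# Average User I/O wait for service "ubshost", from the table above:
# 60232 waits, 90 seconds of total wait time (rounded in the report).
total_waits = 60232
time_waited_s = 90

avg_wait_ms = time_waited_s / total_waits * 1000
print(f"{avg_wait_ms:.2f} ms")  # roughly 1.5 ms per User I/O wait
```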
I am not able to paste the whole AWR report; I have pasted some of its sections above.
Please help.
Thanks and Regards, -
Active session Spike on Oracle RAC 11G R2 on HP UX
Dear Experts,
We need urgent help, please, as we are facing very poor performance in our production database.
We have Oracle 11g RAC in an HP-UX environment. The ADDM report is below; kindly check it and please help me figure out the issue and resolve it as soon as possible.
---------Instance 1---------------
ADDM Report for Task 'TASK_36650'
Analysis Period
AWR snapshot range from 11634 to 11636.
Time period starts at 21-JUL-13 07.00.03 PM
Time period ends at 21-JUL-13 09.00.49 PM
Analysis Target
Database 'MCMSDRAC' with DB ID 2894940361.
Database version 11.2.0.1.0.
ADDM performed an analysis of instance mcmsdrac1, numbered 1 and hosted at
mcmsdbl1.
Activity During the Analysis Period
Total database time was 38466 seconds.
The average number of active sessions was 5.31.
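The "average active sessions" figure is simply total DB time divided by the wall-clock length of the analysis period (07.00.03 PM to 09.00.49 PM, i.e. about 7,246 seconds), which is easy to verify:

```python
# Average active sessions = total DB time / elapsed wall-clock time.
# Figures taken from the ADDM report above; the snapshot window runs
# from 21-JUL-13 07.00.03 PM to 09.00.49 PM.
db_time_seconds = 38466            # "Total database time was 38466 seconds."
elapsed_seconds = 2 * 3600 + 46    # 2h 0m 46s between the two snapshots

avg_active_sessions = db_time_seconds / elapsed_seconds
print(round(avg_active_sessions, 2))  # matches the reported 5.31
```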
Summary of Findings
Description Active Sessions Recommendations
Percent of Activity
1 CPU Usage 1.44 | 27.08 1
2 Interconnect Latency .07 | 1.33 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Findings and Recommendations
Finding 1: CPU Usage
Impact is 1.44 active sessions, 27.08% of total activity.
Host CPU was a bottleneck and the instance was consuming 99% of the host CPU.
All wait times will be inflated by wait for CPU.
Host CPU consumption was 99%.
Recommendation 1: Host Configuration
Estimated benefit is 1.44 active sessions, 27.08% of total activity.
Action
Consider adding more CPUs to the host or adding instances serving the
database on other hosts.
Action
Session CPU consumption was throttled by the Oracle Resource Manager.
Consider revising the resource plan that was active during the analysis
period.
Finding 2: Interconnect Latency
Impact is .07 active sessions, 1.33% of total activity.
Higher than expected latency of the cluster interconnect was responsible for
significant database time on this instance.
The instance was consuming 110 kilo bits per second of interconnect bandwidth.
20% of this interconnect bandwidth was used for global cache messaging, 21%
for parallel query messaging and 7% for database lock management.
The average latency for 8K interconnect messages was 42153 microseconds.
The instance is using the private interconnect device "lan2" with IP address
172.16.200.71 and source "Oracle Cluster Repository".
The device "lan2" was used for 100% of interconnect traffic and experienced 0
send or receive errors during the analysis period.
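To put the 42,153-microsecond figure in perspective: at that latency a single serialized stream of 8K cache transfers moves under 200 KB/s. The 1 ms comparison value below is an assumed healthy baseline for a dedicated gigabit interconnect, not a number from the report:

```python
# Back-of-the-envelope: what 42,153 us per 8K interconnect message implies
# for one serialized stream of cache fusion block transfers.
message_bytes = 8 * 1024               # 8K message size from the ADDM report
latency_seconds = 42_153 / 1_000_000   # measured average latency

throughput = message_bytes / latency_seconds
print(f"{throughput / 1024:.0f} KB/s per serialized stream")

# Assumed healthy baseline (not from the report): ~1 ms per 8K message
# on a dedicated gigabit link.
baseline = message_bytes / 0.001
print(f"{baseline / 1024 / 1024:.1f} MB/s at an assumed 1 ms latency")
```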
Recommendation 1: Host Configuration
Estimated benefit is .07 active sessions, 1.33% of total activity.
Action
Investigate cause of high network interconnect latency between database
instances. Oracle's recommended solution is to use a high speed
dedicated network.
Action
Check the configuration of the cluster interconnect. Check OS setup like
adapter setting, firmware and driver release. Check that the OS's socket
receive buffers are large enough to store an entire multiblock read. The
value of parameter "db_file_multiblock_read_count" may be decreased as a
workaround.
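On the socket-buffer point: a full multiblock read of db_file_multiblock_read_count blocks must fit in the OS socket receive buffer. The block size and count below are illustrative values (8K blocks, count of 16), not read from this instance:

```python
# Minimum socket receive buffer needed to hold one entire multiblock
# read, per the ADDM recommendation above. Both values are assumed
# for illustration, not taken from this database.
db_block_size = 8192                 # assumed 8K block size
db_file_multiblock_read_count = 16   # assumed parameter value

min_receive_buffer = db_block_size * db_file_multiblock_read_count
print(min_receive_buffer)  # 131072 bytes, i.e. 128 KB
```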
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additional Information
Miscellaneous Information
Wait class "Application" was not consuming significant database time.
Wait class "Cluster" was not consuming significant database time.
Wait class "Commit" was not consuming significant database time.
Wait class "Concurrency" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
Wait class "Network" was not consuming significant database time.
Wait class "User I/O" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.
Hard parsing of SQL statements was not consuming significant database time.
The database's maintenance windows were active during 100% of the analysis
period.
----------------Instance 2 --------------------
ADDM Report for Task 'TASK_36652'
Analysis Period
AWR snapshot range from 11634 to 11636.
Time period starts at 21-JUL-13 07.00.03 PM
Time period ends at 21-JUL-13 09.00.49 PM
Analysis Target
Database 'MCMSDRAC' with DB ID 2894940361.
Database version 11.2.0.1.0.
ADDM performed an analysis of instance mcmsdrac2, numbered 2 and hosted at
mcmsdbl2.
Activity During the Analysis Period
Total database time was 2898 seconds.
The average number of active sessions was .4.
Summary of Findings
Description Active Sessions Recommendations
Percent of Activity
1 Top SQL Statements .11 | 27.65 5
2 Interconnect Latency .1 | 24.15 1
3 Shared Pool Latches .09 | 22.42 1
4 PL/SQL Execution .06 | 14.39 2
5 Unusual "Other" Wait Event .03 | 8.73 4
6 Unusual "Other" Wait Event .03 | 6.42 3
7 Unusual "Other" Wait Event .03 | 6.29 6
8 Hard Parse .02 | 5.5 0
9 Soft Parse .02 | 3.86 2
10 Unusual "Other" Wait Event .01 | 3.75 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Findings and Recommendations
Finding 1: Top SQL Statements
Impact is .11 active sessions, 27.65% of total activity.
SQL statements consuming significant database time were found. These
statements offer a good opportunity for performance improvement.
Recommendation 1: SQL Tuning
Estimated benefit is .05 active sessions, 12.88% of total activity.
Action
Investigate the PL/SQL statement with SQL_ID "d1s02myktu19h" for
possible performance improvements. You can supplement the information
given here with an ASH report for this SQL_ID.
Related Object
SQL statement with SQL_ID d1s02myktu19h.
begin dbms_utility.validate(:1,:2,:3,:4); end;
Rationale
The SQL Tuning Advisor cannot operate on PL/SQL statements.
Rationale
Database time for this SQL was divided as follows: 13% for SQL
execution, 2% for parsing, 85% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "d1s02myktu19h" was executed 48 times and had
an average elapsed time of 7 seconds.
Rationale
Waiting for event "library cache pin" in wait class "Concurrency"
accounted for 70% of the database time spent in processing the SQL
statement with SQL_ID "d1s02myktu19h".
Rationale
Top level calls to execute the PL/SQL statement with SQL_ID
"63wt8yna5umd6" are responsible for 100% of the database time spent on
the PL/SQL statement with SQL_ID "d1s02myktu19h".
Related Object
SQL statement with SQL_ID 63wt8yna5umd6.
begin DBMS_UTILITY.COMPILE_SCHEMA( 'TPAUSER', FALSE ); end;
Recommendation 2: SQL Tuning
Estimated benefit is .02 active sessions, 4.55% of total activity.
Action
Run SQL Tuning Advisor on the SELECT statement with SQL_ID
"fk3bh3t41101x".
Related Object
SQL statement with SQL_ID fk3bh3t41101x.
SELECT MEM.MEMBER_CODE ,MEM.E_NAME,Pol.Policy_no
,pol.date_from,pol.date_to,POL.E_NAME,MEM.SEX,(SYSDATE-MEM.BIRTH_DATE
) AGE,POL.SCHEME_NO FROM TPAUSER.MEMBERS MEM,TPAUSER.POLICY POL WHERE
POL.QUOTATION_NO=MEM.QUOTATION_NO AND POL.BRANCH_CODE=MEM.BRANCH_CODE
and endt_no=(select max(endt_no) from tpauser.members mm where
mm.member_code=mem.member_code AND mm.QUOTATION_NO=MEM.QUOTATION_NO)
and member_code like '%' || nvl(:1,null) ||'%' ORDER BY MEMBER_CODE
Rationale
The SQL spent 92% of its database time on CPU, I/O and Cluster waits.
This part of database time may be improved by the SQL Tuning Advisor.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "fk3bh3t41101x" was executed 14 times and had
an average elapsed time of 4.9 seconds.
Rationale
At least one execution of the statement ran in parallel.
Recommendation 3: SQL Tuning
Estimated benefit is .02 active sessions, 3.79% of total activity.
Action
Run SQL Tuning Advisor on the SELECT statement with SQL_ID
"7mhjbjg9ntqf5".
Related Object
SQL statement with SQL_ID 7mhjbjg9ntqf5.
SELECT SUM(CNT) FROM (SELECT COUNT(PROC_CODE) CNT FROM
TPAUSER.TORBINY_PROCEDURE WHERE BRANCH_CODE = :B6 AND QUOTATION_NO =
:B5 AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND PR_EFFECTIVE_DATE<=
:B2 AND PROC_CODE = :B1 UNION SELECT COUNT(MED_CODE) CNT FROM
TPAUSER.TORBINY_MEDICINE WHERE BRANCH_CODE = :B6 AND QUOTATION_NO =
:B5 AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND M_EFFECTIVE_DATE<= :B2
AND MED_CODE = :B1 UNION SELECT COUNT(LAB_CODE) CNT FROM
TPAUSER.TORBINY_LAB WHERE BRANCH_CODE = :B6 AND QUOTATION_NO = :B5
AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND L_EFFECTIVE_DATE<= :B2 AND
LAB_CODE = :B1 )
Rationale
The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
This part of database time may be improved by the SQL Tuning Advisor.
Rationale
Database time for this SQL was divided as follows: 0% for SQL execution,
0% for parsing, 100% for PL/SQL execution and 0% for Java execution.
Rationale
SQL statement with SQL_ID "7mhjbjg9ntqf5" was executed 31 times and had
an average elapsed time of 3.4 seconds.
Rationale
Top level calls to execute the SELECT statement with SQL_ID
"a11nzdnd91gsg" are responsible for 100% of the database time spent on
the SELECT statement with SQL_ID "7mhjbjg9ntqf5".
Related Object
SQL statement with SQL_ID a11nzdnd91gsg.
SELECT POLICY_NO,SCHEME_NO FROM TPAUSER.POLICY WHERE QUOTATION_NO
=:B1
Recommendation 4: SQL Tuning
Estimated benefit is .01 active sessions, 3.03% of total activity.
Action
Investigate the SELECT statement with SQL_ID "4uqs4jt7aca5s" for
possible performance improvements. You can supplement the information
given here with an ASH report for this SQL_ID.
Related Object
SQL statement with SQL_ID 4uqs4jt7aca5s.
SELECT DISTINCT USER_ID FROM GV$SESSION, USERS WHERE UPPER (USERNAME)
= UPPER (USER_ID) AND USERS.APPROVAL_CLAIM='VC' AND USER_ID=:B1
Rationale
The SQL spent only 0% of its database time on CPU, I/O and Cluster
waits. Therefore, the SQL Tuning Advisor is not applicable in this case.
Look at performance data for the SQL to find potential improvements.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "4uqs4jt7aca5s" was executed 261 times and had
an average elapsed time of 0.35 seconds.
Rationale
At least one execution of the statement ran in parallel.
Rationale
Top level calls to execute the PL/SQL statement with SQL_ID
"91vt043t78460" are responsible for 100% of the database time spent on
the SELECT statement with SQL_ID "4uqs4jt7aca5s".
Related Object
SQL statement with SQL_ID 91vt043t78460.
begin TPAUSER.RECEIVE_NEW_FAX_APRROVAL(:V00001,:V00002,:V00003,:V0000
4); end;
Recommendation 5: SQL Tuning
Estimated benefit is .01 active sessions, 3.03% of total activity.
Action
Run SQL Tuning Advisor on the SELECT statement with SQL_ID
"7kt28fkc0yn5f".
Related Object
SQL statement with SQL_ID 7kt28fkc0yn5f.
SELECT COUNT(*) FROM TPAUSER.APPROVAL_MASTER WHERE APPROVAL_STATUS IS
NULL AND (UPPER(CODED) = UPPER(:B1 ) OR UPPER(PROCESSED_BY) =
UPPER(:B1 ))
Rationale
The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
This part of database time may be improved by the SQL Tuning Advisor.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "7kt28fkc0yn5f" was executed 1034 times and
had an average elapsed time of 0.063 seconds.
Rationale
Top level calls to execute the PL/SQL statement with SQL_ID
"91vt043t78460" are responsible for 100% of the database time spent on
the SELECT statement with SQL_ID "7kt28fkc0yn5f".
Related Object
SQL statement with SQL_ID 91vt043t78460.
begin TPAUSER.RECEIVE_NEW_FAX_APRROVAL(:V00001,:V00002,:V00003,:V0000
4); end;
Finding 2: Interconnect Latency
Impact is .1 active sessions, 24.15% of total activity.
Higher than expected latency of the cluster interconnect was responsible for
significant database time on this instance.
The instance was consuming 128 kilo bits per second of interconnect bandwidth.
17% of this interconnect bandwidth was used for global cache messaging, 6% for
parallel query messaging and 8% for database lock management.
The average latency for 8K interconnect messages was 41863 microseconds.
The instance is using the private interconnect device "lan2" with IP address
172.16.200.72 and source "Oracle Cluster Repository".
The device "lan2" was used for 100% of interconnect traffic and experienced 0
send or receive errors during the analysis period.
Recommendation 1: Host Configuration
Estimated benefit is .1 active sessions, 24.15% of total activity.
Action
Investigate cause of high network interconnect latency between database
instances. Oracle's recommended solution is to use a high speed
dedicated network.
Action
Check the configuration of the cluster interconnect. Check OS setup like
adapter setting, firmware and driver release. Check that the OS's socket
receive buffers are large enough to store an entire multiblock read. The
value of parameter "db_file_multiblock_read_count" may be decreased as a
workaround.
Symptoms That Led to the Finding:
Inter-instance messaging was consuming significant database time on this
instance.
Impact is .06 active sessions, 14.23% of total activity.
Wait class "Cluster" was consuming significant database time.
Impact is .06 active sessions, 14.23% of total activity.
Finding 3: Shared Pool Latches
Impact is .09 active sessions, 22.42% of total activity.
Contention for latches related to the shared pool was consuming significant
database time.
Waits for "library cache lock" amounted to 5% of database time.
Waits for "library cache pin" amounted to 17% of database time.
Recommendation 1: Application Analysis
Estimated benefit is .09 active sessions, 22.42% of total activity.
Action
Investigate the cause for latch contention using the given blocking
sessions or modules.
Rationale
The session with ID 17 and serial number 15595 in instance number 1 was
the blocking session responsible for 34% of this recommendation's
benefit.
Symptoms That Led to the Finding:
Wait class "Concurrency" was consuming significant database time.
Impact is .1 active sessions, 24.96% of total activity.
Finding 4: PL/SQL Execution
Impact is .06 active sessions, 14.39% of total activity.
PL/SQL execution consumed significant database time.
Recommendation 1: SQL Tuning
Estimated benefit is .05 active sessions, 12.5% of total activity.
Action
Tune the entry point PL/SQL "SYS.DBMS_UTILITY.COMPILE_SCHEMA" of type
"PACKAGE" and ID 6019. Refer to the PL/SQL documentation for addition
information.
Rationale
318 seconds spent in executing PL/SQL "SYS.DBMS_UTILITY.VALIDATE#2" of
type "PACKAGE" and ID 6019.
Recommendation 2: SQL Tuning
Estimated benefit is .01 active sessions, 1.89% of total activity.
Action
Tune the entry point PL/SQL
"SYSMAN.EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS" of type "PACKAGE" and
ID 68654. Refer to the PL/SQL documentation for addition information.
Finding 5: Unusual "Other" Wait Event
Impact is .03 active sessions, 8.73% of total activity.
Wait event "DFS lock handle" in wait class "Other" was consuming significant
database time.
Recommendation 1: Application Analysis
Estimated benefit is .03 active sessions, 8.73% of total activity.
Action
Investigate the cause for high "DFS lock handle" waits. Refer to
Oracle's "Database Reference" for the description of this wait event.
Recommendation 2: Application Analysis
Estimated benefit is .03 active sessions, 8.27% of total activity.
Action
Investigate the cause for high "DFS lock handle" waits in Service
"mcmsdrac".
Recommendation 3: Application Analysis
Estimated benefit is .02 active sessions, 5.05% of total activity.
Action
Investigate the cause for high "DFS lock handle" waits in Module "TOAD
9.7.2.5".
Recommendation 4: Application Analysis
Estimated benefit is .01 active sessions, 3.21% of total activity.
Action
Investigate the cause for high "DFS lock handle" waits in Module
"toad.exe".
Symptoms That Led to the Finding:
Wait class "Other" was consuming significant database time.
Impact is .15 active sessions, 38.29% of total activity.
Finding 6: Unusual "Other" Wait Event
Impact is .03 active sessions, 6.42% of total activity.
Wait event "reliable message" in wait class "Other" was consuming significant
database time.
Recommendation 1: Application Analysis
Estimated benefit is .03 active sessions, 6.42% of total activity.
Action
Investigate the cause for high "reliable message" waits. Refer to
Oracle's "Database Reference" for the description of this wait event.
Recommendation 2: Application Analysis
Estimated benefit is .03 active sessions, 6.42% of total activity.
Action
Investigate the cause for high "reliable message" waits in Service
"mcmsdrac".
Recommendation 3: Application Analysis
Estimated benefit is .02 active sessions, 4.13% of total activity.
Action
Investigate the cause for high "reliable message" waits in Module "TOAD
9.7.2.5".
Symptoms That Led to the Finding:
Wait class "Other" was consuming significant database time.
Impact is .15 active sessions, 38.29% of total activity.
Finding 7: Unusual "Other" Wait Event
Impact is .03 active sessions, 6.29% of total activity.
Wait event "enq: PS - contention" in wait class "Other" was consuming
significant database time.
Recommendation 1: Application Analysis
Estimated benefit is .03 active sessions, 6.29% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits. Refer to
Oracle's "Database Reference" for the description of this wait event.
Recommendation 2: Application Analysis
Estimated benefit is .02 active sessions, 6.02% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits in Service
"mcmsdrac".
Recommendation 3: Application Analysis
Estimated benefit is .02 active sessions, 4.93% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits with
P1,P2,P3 ("name|mode, instance, slave ID") values "1347616774", "1" and
"3599" respectively.
Recommendation 4: Application Analysis
Estimated benefit is .01 active sessions, 2.74% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits in Module
"Inbox Reader_92.exe".
Recommendation 5: Application Analysis
Estimated benefit is .01 active sessions, 2.74% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits in Module
"TOAD 9.7.2.5".
Recommendation 6: Application Analysis
Estimated benefit is .01 active sessions, 1.37% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits with
P1,P2,P3 ("name|mode, instance, slave ID") values "1347616774", "1" and
"3598" respectively.
Symptoms That Led to the Finding:
Wait class "Other" was consuming significant database time.
Impact is .15 active sessions, 38.29% of total activity.
Finding 8: Hard Parse
Impact is .02 active sessions, 5.5% of total activity.
Hard parsing of SQL statements was consuming significant database time.
Hard parses due to cursor environment mismatch were not consuming significant
database time.
Hard parsing SQL statements that encountered parse errors was not consuming
significant database time.
Hard parses due to literal usage and cursor invalidation were not consuming
significant database time.
The Oracle instance memory (SGA and PGA) was adequately sized.
No recommendations are available.
Symptoms That Led to the Finding:
Contention for latches related to the shared pool was consuming
significant database time.
Impact is .09 active sessions, 22.42% of total activity.
Wait class "Concurrency" was consuming significant database time.
Impact is .1 active sessions, 24.96% of total activity.
Finding 9: Soft Parse
Impact is .02 active sessions, 3.86% of total activity.
Soft parsing of SQL statements was consuming significant database time.
Recommendation 1: Application Analysis
Estimated benefit is .02 active sessions, 3.86% of total activity.
Action
Investigate application logic to keep open the frequently used cursors.
Note that cursors are closed by both cursor close calls and session
disconnects.
Recommendation 2: Database Configuration
Estimated benefit is .02 active sessions, 3.86% of total activity.
Action
Consider increasing the session cursor cache size by increasing the
value of parameter "session_cached_cursors".
Rationale
The value of parameter "session_cached_cursors" was "100" during the
analysis period.
Symptoms That Led to the Finding:
Contention for latches related to the shared pool was consuming
significant database time.
Impact is .09 active sessions, 22.42% of total activity.
Wait class "Concurrency" was consuming significant database time.
Impact is .1 active sessions, 24.96% of total activity.
Finding 10: Unusual "Other" Wait Event
Impact is .01 active sessions, 3.75% of total activity.
Wait event "IPC send completion sync" in wait class "Other" was consuming
significant database time.
Recommendation 1: Application Analysis
Estimated benefit is .01 active sessions, 3.75% of total activity.
Action
Investigate the cause for high "IPC send completion sync" waits. Refer
to Oracle's "Database Reference" for the description of this wait event.
Recommendation 2: Application Analysis
Estimated benefit is .01 active sessions, 3.75% of total activity.
Action
Investigate the cause for high "IPC send completion sync" waits with P1
("send count") value "1".
Recommendation 3: Application Analysis
Estimated benefit is .01 active sessions, 2.59% of total activity.
Action
Investigate the cause for high "IPC send completion sync" waits in
Service "mcmsdrac".
Recommendation 4: Application Analysis
Estimated benefit is .01 active sessions, 1.73% of total activity.
Action
Investigate the cause for high "IPC send completion sync" waits in
Module "TOAD 9.7.2.5".
Symptoms That Led to the Finding:
Wait class "Other" was consuming significant database time.
Impact is .15 active sessions, 38.29% of total activity.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additional Information
Miscellaneous Information
Wait class "Application" was not consuming significant database time.
Wait class "Commit" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
CPU was not a bottleneck for the instance.
Wait class "Network" was not consuming significant database time.
Wait class "User I/O" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.
The database's maintenance windows were active during 100% of the analysis
period.
Please help, experts.
Please do the needful; it's really very urgent.
Thanks,
Syed -
Issues while installing ORACLE 10g R2 RAC on RHEL 5.3
I am installing Oracle 10g R2 RAC on RHEL 5.3 in a test environment; my aim is to install using ASM.
I went through the prerequisites successfully before beginning the installation.
Since RHEL 5.3 doesn't support the raw devices service, I used udev to configure the shared storage, following:
http://www.idevelopment.info/data/Unix/Linux/LINUX_ConnectingToAniSCSITargetWithOpen-iSCSIInitiatorUsingLinux.shtml#Configure%20iSCSI%20Initiator%20and%20New%20Volume
My storage is an iSCSI target on Openfiler.
fdisk -l on both nodes shows the shared drives.
[root@node1 ~]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 2053 16386300 8e Linux LVM
/dev/sda3 2054 2372 2562367+ 82 Linux swap / Solaris
Disk /dev/sdd: 10.5 GB, 10502537216 bytes
64 heads, 32 sectors/track, 10016 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 10016 10256368 83 Linux
Disk /dev/sdb: 10.5 GB, 10536091648 bytes
64 heads, 32 sectors/track, 10048 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 10048 10289136 83 Linux
Disk /dev/sdc: 10.5 GB, 10536091648 bytes
64 heads, 32 sectors/track, 10048 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 10048 10289136 83 Linux
Disk /dev/sdf: 10.5 GB, 10569646080 bytes
64 heads, 32 sectors/track, 10080 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 10080 10321904 83 Linux
Disk /dev/sde: 10.5 GB, 10502537216 bytes
64 heads, 32 sectors/track, 10016 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 10016 10256368 83 Linux
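The geometry lines in this fdisk output are internally consistent: the byte size reported for each disk equals cylinders times bytes per cylinder. Checking two of the iSCSI disks:

```python
# Verify the fdisk geometry above: disk bytes = cylinders * bytes per
# cylinder. Both disks use "cylinders of 2048 * 512 = 1048576 bytes".
bytes_per_cylinder = 2048 * 512

sdd_bytes = 10016 * bytes_per_cylinder   # /dev/sdd: 10016 cylinders
sdb_bytes = 10048 * bytes_per_cylinder   # /dev/sdb: 10048 cylinders

print(sdd_bytes)  # 10502537216, as reported for /dev/sdd
print(sdb_bytes)  # 10536091648, as reported for /dev/sdb
```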
[root@node1 ~]# ls -l /dev/iscsi/arpl*
/dev/iscsi/arpl1:
total 0
lrwxrwxrwx 1 oracle oinstall 9 Jun 30 12:58 part -> ../../sde
lrwxrwxrwx 1 oracle oinstall 10 Jun 30 12:58 part1 -> ../../sde1
/dev/iscsi/arpl2:
total 0
lrwxrwxrwx 1 oracle oinstall 9 Jun 30 12:58 part -> ../../sdd
lrwxrwxrwx 1 oracle oinstall 10 Jun 30 12:58 part1 -> ../../sdd1
/dev/iscsi/arpl3:
total 0
lrwxrwxrwx 1 oracle oinstall 9 Jun 30 12:58 part -> ../../sdb
lrwxrwxrwx 1 oracle oinstall 10 Jun 30 12:58 part1 -> ../../sdb1
/dev/iscsi/arpl4:
total 0
lrwxrwxrwx 1 oracle oinstall 9 Jun 30 12:58 part -> ../../sdc
lrwxrwxrwx 1 oracle oinstall 10 Jun 30 12:58 part1 -> ../../sdc1
/dev/iscsi/arpl5:
total 0
lrwxrwxrwx 1 oracle oinstall 9 Jun 30 12:58 part -> ../../sdf
lrwxrwxrwx 1 oracle oinstall 10 Jun 30 12:58 part1 -> ../../sdf1
[root@node1 ~]#
Configuring udev results in the drives being mapped to persistent device names, e.g. /dev/iscsi/arpl1/part1.
Still, when I enter the name of the raw device which I have created, I get the following error. Please help.
Image: !http://img91.imageshack.us/img91/7448/oracle.png!
Did you check these:
http://download-west.oracle.com/docs/cd/B19306_01/install.102/b14203/storage.htm#BABBHECD
http://download.oracle.com/docs/cd/B28359_01/install.111/b28263/storage.htm#CDEBFDEH
Also, before you do what is suggested at the URL posted by Mufalani, it seems that you should do the following:
Check for the udev raw mapping rule file /etc/udev/rules.d/60-raw.rules.
On RHEL/OEL 5, the udev raw mapping rule file /etc/udev/rules.d/60-raw.rules should exist by default as part of the util-linux package, for example:
# ls /etc/udev/rules.d/60-raw.rules
/etc/udev/rules.d/60-raw.rules
Create a custom udev raw mapping rule file, say /etc/udev/rules.d/61-oracleraw.rules, for example:
# touch /etc/udev/rules.d/61-oracleraw.rules
Add the udev raw binding rules to the /etc/udev/rules.d/61-oracleraw.rules file, for example:
# cat /etc/udev/rules.d/61-oracleraw.rules
# Raw bind to Oracle Clusterware devices
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id", RESULT=="360a98000686f6959684a453333524174", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id", RESULT=="360a98000686f6959684a453333524179", RUN+="/bin/raw /dev/raw/raw2 %N"
If you are migrating to RHEL5/OEL5 from an earlier Linux version that relied on the /etc/sysconfig/rawdevices file for persistent raw device bindings, the following script can be used as a basis for generating your RHEL5/OEL5 udev rules. Note that the script does not factor in device name persistency.
#!/bin/bash
grep -v "^ *#" /etc/sysconfig/rawdevices | grep -v "^$" | while read dev major minor ; do
if [ -z "$minor" ]; then
echo "ACTION==\"add\", KERNEL==\"${major##/dev/}\", RUN+=\"/bin/raw $dev %N\""
else
echo "ACTION==\"add\", ENV{MAJOR}==\"$major\", ENV{MINOR}==\"$minor\", RUN+=\"/bin/raw $dev %M %m\""
fi
done -
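Since the nested quoting in the shell script above is easy to misread, here is a hypothetical Python sketch of the same transformation (rawdevices entries in, udev raw binding rules out); like the original, it does not handle device-name persistency:

```python
# A sketch of the same rawdevices -> udev-rule conversion as the shell
# script above: one rule per non-comment line of /etc/sysconfig/rawdevices.
def rawdevices_to_udev(lines):
    rules = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments, like the grep -v filters
        fields = line.split()
        if len(fields) == 2:
            # "/dev/raw/raw1 /dev/sdb1" form: match on the kernel name
            dev, target = fields
            kernel = target[len("/dev/"):] if target.startswith("/dev/") else target
            rules.append(f'ACTION=="add", KERNEL=="{kernel}", RUN+="/bin/raw {dev} %N"')
        elif len(fields) == 3:
            # "/dev/raw/raw1 8 33" form: match on major/minor numbers
            dev, major, minor = fields
            rules.append(
                f'ACTION=="add", ENV{{MAJOR}}=="{major}", ENV{{MINOR}}=="{minor}", '
                f'RUN+="/bin/raw {dev} %M %m"'
            )
    return rules

# Hypothetical sample entries, for illustration only:
for rule in rawdevices_to_udev([
    "# raw device bindings",
    "/dev/raw/raw1 /dev/sdb1",
    "/dev/raw/raw2 8 33",
]):
    print(rule)
```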
Hi,
We have a 2-node Oracle RAC 10.2.0.3 database on RedHat 4.
In the beginning we had an error ORA-600 [12803], so only SYS could connect to the database. I found note 1026653.6, which says that we need to recreate the AUDSES$ sequence, but that before doing so we have to restart the database.
When we stopped the database we got another ORA-600, and now it is impossible to start it!
Here is a copy of our alert file:
Picked latch-free SCN scheme 2
Autotune of undo retention is turned on.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.3.0.
System parameters with non-default values:
processes = 300
sessions = 335
sga_max_size = 524288000
__shared_pool_size = 310378496
__large_pool_size = 4194304
__java_pool_size = 8388608
__streams_pool_size = 8388608
spfile = +DATA/osista/spfileosista.ora
nls_language = FRENCH
nls_territory = FRANCE
nls_length_semantics = CHAR
sga_target = 524288000
control_files = +DATA/osista/controlfile/control01.ctl, +DATA/osista/controlfile/control02.ctl
db_block_size = 8192
__db_cache_size = 184549376
compatible = 10.2.0.3.0
log_archive_dest_1 = LOCATION=USE_DB_RECOVERY_FILE_DEST
db_file_multiblock_read_count= 16
cluster_database = TRUE
cluster_database_instances= 2
db_create_file_dest = +DATA
db_recovery_file_dest = +FLASH
db_recovery_file_dest_size= 68543315968
thread = 2
instance_number = 2
undo_management = AUTO
undo_tablespace = UNDOTBS2
undo_retention = 29880
remote_login_passwordfile= EXCLUSIVE
db_domain =
dispatchers = (PROTOCOL=TCP) (SERVICE=OSISTAXDB)
local_listener = (address=(protocol=tcp)(port=1521)(host=132.147.160.243))
remote_listener = LISTENERS_OSISTA
job_queue_processes = 10
background_dump_dest = /oracle/product/admin/OSISTA/bdump
user_dump_dest = /oracle/product/admin/OSISTA/udump
core_dump_dest = /oracle/product/admin/OSISTA/cdump
audit_file_dest = /oracle/product/admin/OSISTA/adump
db_name = OSISTA
open_cursors = 300
pga_aggregate_target = 104857600
aq_tm_processes = 1
Cluster communication is configured to use the following interface(s) for this instance
172.16.0.2
Wed Jun 13 11:04:30 2012
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=8560
DIAG started with pid=3, OS id=8562
PSP0 started with pid=4, OS id=8566
LMON started with pid=5, OS id=8570
LMD0 started with pid=6, OS id=8574
LMS0 started with pid=7, OS id=8576
LMS1 started with pid=8, OS id=8580
MMAN started with pid=9, OS id=8584
DBW0 started with pid=10, OS id=8586
LGWR started with pid=11, OS id=8588
CKPT started with pid=12, OS id=8590
SMON started with pid=13, OS id=8592
RECO started with pid=14, OS id=8594
CJQ0 started with pid=15, OS id=8596
MMON started with pid=16, OS id=8598
Wed Jun 13 11:04:31 2012
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
MMNL started with pid=17, OS id=8600
Wed Jun 13 11:04:31 2012
starting up 1 shared server(s) ...
Wed Jun 13 11:04:31 2012
lmon registered with NM - instance id 2 (internal mem no 1)
Wed Jun 13 11:04:31 2012
Reconfiguration started (old inc 0, new inc 2)
List of nodes:
1
Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
Communication channels reestablished
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Wed Jun 13 11:04:31 2012
LMS 0: 0 GCS shadows cancelled, 0 closed
Wed Jun 13 11:04:31 2012
LMS 1: 0 GCS shadows cancelled, 0 closed
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Post SMON to start 1st pass IR
Wed Jun 13 11:04:31 2012
LMS 0: 0 GCS shadows traversed, 0 replayed
Wed Jun 13 11:04:31 2012
LMS 1: 0 GCS shadows traversed, 0 replayed
Wed Jun 13 11:04:31 2012
Submitted all GCS remote-cache requests
Fix write in gcs resources
Reconfiguration complete
LCK0 started with pid=20, OS id=8877
Wed Jun 13 11:04:43 2012
alter database mount
Wed Jun 13 11:04:43 2012
This instance was first to mount
Wed Jun 13 11:04:43 2012
Starting background process ASMB
ASMB started with pid=25, OS id=10068
Starting background process RBAL
RBAL started with pid=26, OS id=10072
Wed Jun 13 11:04:47 2012
SUCCESS: diskgroup DATA was mounted
Wed Jun 13 11:04:51 2012
Setting recovery target incarnation to 1
Wed Jun 13 11:04:52 2012
Successful mount of redo thread 2, with mount id 3005749259
Wed Jun 13 11:04:52 2012
Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
Completed: alter database mount
Wed Jun 13 11:05:06 2012
alter database open
Wed Jun 13 11:05:06 2012
This instance was first to open
Wed Jun 13 11:05:06 2012
Beginning crash recovery of 1 threads
parallel recovery started with 2 processes
Wed Jun 13 11:05:07 2012
Started redo scan
Wed Jun 13 11:05:07 2012
Completed redo scan
61 redo blocks read, 4 data blocks need recovery
Wed Jun 13 11:05:07 2012
Started redo application at
Thread 1: logseq 7924, block 3, scn 506098125
Wed Jun 13 11:05:07 2012
Recovery of Online Redo Log: Thread 1 Group 2 Seq 7924 Reading mem 0
Mem# 0: +DATA/osista/onlinelog/group_2.372.742132543
Wed Jun 13 11:05:07 2012
Completed redo application
Wed Jun 13 11:05:07 2012
Completed crash recovery at
Thread 1: logseq 7924, block 64, scn 506118186
4 data blocks read, 4 data blocks written, 61 redo blocks read
Switch log for thread 1 to sequence 7925
Picked broadcast on commit scheme to generate SCNs
db_recovery_file_dest_size of 65368 MB is 0.61% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
SUCCESS: diskgroup FLASH was mounted
SUCCESS: diskgroup FLASH was dismounted
Thread 1 advanced to log sequence 7926
SUCCESS: diskgroup FLASH was mounted
SUCCESS: diskgroup FLASH was dismounted
Thread 1 advanced to log sequence 7927
Wed Jun 13 11:05:11 2012
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=31, OS id=12747
Wed Jun 13 11:05:11 2012
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARC1 started with pid=32, OS id=12749
Wed Jun 13 11:05:12 2012
Thread 2 opened at log sequence 7176
Current log# 4 seq# 7176 mem# 0: +DATA/osista/onlinelog/group_4.289.742134597
Wed Jun 13 11:05:12 2012
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
Wed Jun 13 11:05:12 2012
Successful open of redo thread 2
Wed Jun 13 11:05:12 2012
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Wed Jun 13 11:05:12 2012
ARC0: Becoming the heartbeat ARCH
Wed Jun 13 11:05:12 2012
SMON: enabling cache recovery
Wed Jun 13 11:05:15 2012
Successfully onlined Undo Tablespace 20.
Wed Jun 13 11:05:15 2012
SMON: enabling tx recovery
Wed Jun 13 11:05:15 2012
Database Characterset is AL32UTF8
Wed Jun 13 11:05:16 2012
Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_9174.trc:
ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
Wed Jun 13 11:05:16 2012
Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_9174.trc:
ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
Error 600 happened during db open, shutting down database
USER: terminating instance due to error 600
Instance terminated by USER, pid = 9174
ORA-1092 signalled during: alter database open...
Wed Jun 13 11:06:16 2012
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Interface type 1 eth0 172.16.0.0 configured from OCR for use as a cluster interconnect
Interface type 1 bond0 132.147.160.0 configured from OCR for use as a public interface
Picked latch-free SCN scheme 2
Autotune of undo retention is turned on.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.3.0.
System parameters with non-default values:
processes = 300
sessions = 335
sga_max_size = 524288000
__shared_pool_size = 314572800
__large_pool_size = 4194304
__java_pool_size = 8388608
__streams_pool_size = 8388608
spfile = +DATA/osista/spfileosista.ora
nls_language = FRENCH
nls_territory = FRANCE
nls_length_semantics = CHAR
sga_target = 524288000
control_files = DATA/osista/controlfile/control01.ctl, DATA/osista/controlfile/control02.ctl
db_block_size = 8192
__db_cache_size = 180355072
compatible = 10.2.0.3.0
log_archive_dest_1 = LOCATION=USE_DB_RECOVERY_FILE_DEST
db_file_multiblock_read_count= 16
cluster_database = TRUE
cluster_database_instances= 2
db_create_file_dest = +DATA
db_recovery_file_dest = +FLASH
db_recovery_file_dest_size= 68543315968
thread = 2
instance_number = 2
undo_management = AUTO
undo_tablespace = UNDOTBS2
undo_retention = 29880
remote_login_passwordfile= EXCLUSIVE
db_domain =
dispatchers = (PROTOCOL=TCP) (SERVICE=OSISTAXDB)
local_listener = (address=(protocol=tcp)(port=1521)(host=132.147.160.243))
remote_listener = LISTENERS_OSISTA
job_queue_processes = 10
background_dump_dest = /oracle/product/admin/OSISTA/bdump
user_dump_dest = /oracle/product/admin/OSISTA/udump
core_dump_dest = /oracle/product/admin/OSISTA/cdump
audit_file_dest = /oracle/product/admin/OSISTA/adump
db_name = OSISTA
open_cursors = 300
pga_aggregate_target = 104857600
aq_tm_processes = 1
Cluster communication is configured to use the following interface(s) for this instance
172.16.0.2
Wed Jun 13 11:06:16 2012
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=18682
DIAG started with pid=3, OS id=18684
PSP0 started with pid=4, OS id=18695
LMON started with pid=5, OS id=18704
LMD0 started with pid=6, OS id=18721
LMS0 started with pid=7, OS id=18735
LMS1 started with pid=8, OS id=18753
MMAN started with pid=9, OS id=18767
DBW0 started with pid=10, OS id=18788
LGWR started with pid=11, OS id=18796
CKPT started with pid=12, OS id=18799
SMON started with pid=13, OS id=18801
RECO started with pid=14, OS id=18803
CJQ0 started with pid=15, OS id=18805
MMON started with pid=16, OS id=18807
Wed Jun 13 11:06:17 2012
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
MMNL started with pid=17, OS id=18809
Wed Jun 13 11:06:17 2012
starting up 1 shared server(s) ...
Wed Jun 13 11:06:17 2012
lmon registered with NM - instance id 2 (internal mem no 1)
Wed Jun 13 11:06:17 2012
Reconfiguration started (old inc 0, new inc 2)
List of nodes:
1
Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
Communication channels reestablished
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Wed Jun 13 11:06:18 2012
LMS 0: 0 GCS shadows cancelled, 0 closed
Wed Jun 13 11:06:18 2012
LMS 1: 0 GCS shadows cancelled, 0 closed
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Post SMON to start 1st pass IR
Wed Jun 13 11:06:18 2012
LMS 0: 0 GCS shadows traversed, 0 replayed
Wed Jun 13 11:06:18 2012
LMS 1: 0 GCS shadows traversed, 0 replayed
Wed Jun 13 11:06:18 2012
Submitted all GCS remote-cache requests
Fix write in gcs resources
Reconfiguration complete
LCK0 started with pid=20, OS id=18816
Wed Jun 13 11:06:18 2012
ALTER DATABASE MOUNT
Wed Jun 13 11:06:18 2012
This instance was first to mount
Wed Jun 13 11:06:18 2012
Reconfiguration started (old inc 2, new inc 4)
List of nodes:
0 1
Wed Jun 13 11:06:18 2012
Starting background process ASMB
Wed Jun 13 11:06:18 2012
Global Resource Directory frozen
Communication channels reestablished
ASMB started with pid=22, OS id=18913
Starting background process RBAL
* domain 0 valid = 0 according to instance 0
Wed Jun 13 11:06:18 2012
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Wed Jun 13 11:06:18 2012
LMS 0: 0 GCS shadows cancelled, 0 closed
Wed Jun 13 11:06:18 2012
LMS 1: 0 GCS shadows cancelled, 0 closed
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Wed Jun 13 11:06:18 2012
LMS 0: 0 GCS shadows traversed, 0 replayed
Wed Jun 13 11:06:18 2012
LMS 1: 0 GCS shadows traversed, 0 replayed
Wed Jun 13 11:06:18 2012
Submitted all GCS remote-cache requests
Fix write in gcs resources
RBAL started with pid=23, OS id=18917
Reconfiguration complete
Wed Jun 13 11:06:22 2012
SUCCESS: diskgroup DATA was mounted
Wed Jun 13 11:06:26 2012
Setting recovery target incarnation to 1
Wed Jun 13 11:06:26 2012
Successful mount of redo thread 2, with mount id 3005703530
Wed Jun 13 11:06:26 2012
Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
Completed: ALTER DATABASE MOUNT
Wed Jun 13 11:06:27 2012
ALTER DATABASE OPEN
This instance was first to open
Wed Jun 13 11:06:27 2012
Beginning crash recovery of 1 threads
parallel recovery started with 2 processes
Wed Jun 13 11:06:27 2012
Started redo scan
Wed Jun 13 11:06:27 2012
Completed redo scan
61 redo blocks read, 4 data blocks need recovery
Wed Jun 13 11:06:28 2012
Started redo application at
Thread 2: logseq 7176, block 3
Wed Jun 13 11:06:28 2012
Recovery of Online Redo Log: Thread 2 Group 4 Seq 7176 Reading mem 0
Mem# 0: +DATA/osista/onlinelog/group_4.289.742134597
Wed Jun 13 11:06:28 2012
Completed redo application
Wed Jun 13 11:06:28 2012
Completed crash recovery at
Thread 2: logseq 7176, block 64, scn 506138248
4 data blocks read, 4 data blocks written, 61 redo blocks read
Picked broadcast on commit scheme to generate SCNs
Wed Jun 13 11:06:28 2012
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=28, OS id=19692
Wed Jun 13 11:06:28 2012
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARC1 started with pid=29, OS id=19695
Wed Jun 13 11:06:28 2012
Thread 2 advanced to log sequence 7177
Thread 2 opened at log sequence 7177
Current log# 3 seq# 7177 mem# 0: +DATA/osista/onlinelog/group_3.291.742134597
Successful open of redo thread 2
Wed Jun 13 11:06:28 2012
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Wed Jun 13 11:06:28 2012
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
Wed Jun 13 11:06:28 2012
ARC1: Becoming the heartbeat ARCH
Wed Jun 13 11:06:28 2012
SMON: enabling cache recovery
Wed Jun 13 11:06:28 2012
db_recovery_file_dest_size of 65368 MB is 0.61% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
SUCCESS: diskgroup FLASH was mounted
SUCCESS: diskgroup FLASH was dismounted
Wed Jun 13 11:06:31 2012
Successfully onlined Undo Tablespace 20.
Wed Jun 13 11:06:31 2012
SMON: enabling tx recovery
Wed Jun 13 11:06:31 2012
Database Characterset is AL32UTF8
Wed Jun 13 11:06:31 2012
Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_19596.trc:
ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
Wed Jun 13 11:06:32 2012
Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_19596.trc:
ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
Error 600 happened during db open, shutting down database
USER: terminating instance due to error 600
Instance terminated by USER, pid = 19596
ORA-1092 signalled during: ALTER DATABASE OPEN...
Wed Jun 13 11:11:35 2012
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Interface type 1 eth0 172.16.0.0 configured from OCR for use as a cluster interconnect
Interface type 1 bond0 132.147.160.0 configured from OCR for use as a public interface
Picked latch-free SCN scheme 2
Autotune of undo retention is turned on.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.3.0.
System parameters with non-default values:
processes = 300
sessions = 335
sga_max_size = 524288000
__shared_pool_size = 318767104
__large_pool_size = 4194304
__java_pool_size = 8388608
__streams_pool_size = 8388608
spfile = +DATA/osista/spfileosista.ora
nls_language = FRENCH
nls_territory = FRANCE
nls_length_semantics = CHAR
sga_target = 524288000
control_files = DATA/osista/controlfile/control01.ctl, DATA/osista/controlfile/control02.ctl
db_block_size = 8192
__db_cache_size = 176160768
compatible = 10.2.0.3.0
log_archive_dest_1 = LOCATION=USE_DB_RECOVERY_FILE_DEST
db_file_multiblock_read_count= 16
cluster_database = TRUE
cluster_database_instances= 2
db_create_file_dest = +DATA
db_recovery_file_dest = +FLASH
db_recovery_file_dest_size= 68543315968
thread = 2
instance_number = 2
undo_management = AUTO
undo_tablespace = UNDOTBS2
undo_retention = 29880
remote_login_passwordfile= EXCLUSIVE
db_domain =
dispatchers = (PROTOCOL=TCP) (SERVICE=OSISTAXDB)
local_listener = (address=(protocol=tcp)(port=1521)(host=132.147.160.243))
remote_listener = LISTENERS_OSISTA
job_queue_processes = 10
background_dump_dest = /oracle/product/admin/OSISTA/bdump
user_dump_dest = /oracle/product/admin/OSISTA/udump
core_dump_dest = /oracle/product/admin/OSISTA/cdump
audit_file_dest = /oracle/product/admin/OSISTA/adump
db_name = OSISTA
open_cursors = 300
pga_aggregate_target = 104857600
aq_tm_processes = 1
Cluster communication is configured to use the following interface(s) for this instance
172.16.0.2
Wed Jun 13 11:11:35 2012
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=16101
DIAG started with pid=3, OS id=16103
PSP0 started with pid=4, OS id=16105
LMON started with pid=5, OS id=16107
LMD0 started with pid=6, OS id=16110
LMS0 started with pid=7, OS id=16112
LMS1 started with pid=8, OS id=16116
MMAN started with pid=9, OS id=16120
DBW0 started with pid=10, OS id=16132
LGWR started with pid=11, OS id=16148
CKPT started with pid=12, OS id=16169
SMON started with pid=13, OS id=16185
RECO started with pid=14, OS id=16203
CJQ0 started with pid=15, OS id=16219
MMON started with pid=16, OS id=16227
Wed Jun 13 11:11:36 2012
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
MMNL started with pid=17, OS id=16229
Wed Jun 13 11:11:36 2012
starting up 1 shared server(s) ...
Wed Jun 13 11:11:36 2012
lmon registered with NM - instance id 2 (internal mem no 1)
Wed Jun 13 11:11:36 2012
Reconfiguration started (old inc 0, new inc 2)
List of nodes:
1
Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
Communication channels reestablished
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Wed Jun 13 11:11:36 2012
LMS 0: 0 GCS shadows cancelled, 0 closed
Wed Jun 13 11:11:36 2012
LMS 1: 0 GCS shadows cancelled, 0 closed
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Post SMON to start 1st pass IR
Wed Jun 13 11:11:36 2012
LMS 1: 0 GCS shadows traversed, 0 replayed
Wed Jun 13 11:11:36 2012
LMS 0: 0 GCS shadows traversed, 0 replayed
Wed Jun 13 11:11:36 2012
Submitted all GCS remote-cache requests
Fix write in gcs resources
Reconfiguration complete
LCK0 started with pid=20, OS id=16235
Wed Jun 13 11:11:37 2012
ALTER DATABASE MOUNT
Wed Jun 13 11:11:37 2012
This instance was first to mount
Wed Jun 13 11:11:37 2012
Starting background process ASMB
ASMB started with pid=22, OS id=16343
Starting background process RBAL
RBAL started with pid=23, OS id=16347
Wed Jun 13 11:11:44 2012
SUCCESS: diskgroup DATA was mounted
Wed Jun 13 11:11:49 2012
Setting recovery target incarnation to 1
Wed Jun 13 11:11:49 2012
Successful mount of redo thread 2, with mount id 3005745065
Wed Jun 13 11:11:49 2012
Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
Completed: ALTER DATABASE MOUNT
Wed Jun 13 11:22:25 2012
alter database open
This instance was first to open
Wed Jun 13 11:22:26 2012
Beginning crash recovery of 1 threads
parallel recovery started with 2 processes
Wed Jun 13 11:22:26 2012
Started redo scan
Wed Jun 13 11:22:26 2012
Completed redo scan
61 redo blocks read, 4 data blocks need recovery
Wed Jun 13 11:22:26 2012
Started redo application at
Thread 1: logseq 7927, block 3
Wed Jun 13 11:22:26 2012
Recovery of Online Redo Log: Thread 1 Group 1 Seq 7927 Reading mem 0
Mem# 0: +DATA/osista/onlinelog/group_1.283.742132543
Wed Jun 13 11:22:26 2012
Completed redo application
Wed Jun 13 11:22:26 2012
Completed crash recovery at
Thread 1: logseq 7927, block 64, scn 506178382
4 data blocks read, 4 data blocks written, 61 redo blocks read
Switch log for thread 1 to sequence 7928
Picked broadcast on commit scheme to generate SCNs
Wed Jun 13 11:22:27 2012
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=31, OS id=13010
Wed Jun 13 11:22:27 2012
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARC1 started with pid=32, OS id=13033
Wed Jun 13 11:22:27 2012
Thread 2 opened at log sequence 7178
Current log# 4 seq# 7178 mem# 0: +DATA/osista/onlinelog/group_4.289.742134597
Successful open of redo thread 2
Wed Jun 13 11:22:27 2012
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Wed Jun 13 11:22:27 2012
ARC0: Becoming the 'no FAL' ARCH
ARC0: Becoming the 'no SRL' ARCH
Wed Jun 13 11:22:27 2012
ARC1: Becoming the heartbeat ARCH
Wed Jun 13 11:22:27 2012
SMON: enabling cache recovery
Wed Jun 13 11:22:30 2012
db_recovery_file_dest_size of 65368 MB is 0.61% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
SUCCESS: diskgroup FLASH was mounted
SUCCESS: diskgroup FLASH was dismounted
Wed Jun 13 11:22:31 2012
Successfully onlined Undo Tablespace 20.
Wed Jun 13 11:22:31 2012
SMON: enabling tx recovery
Wed Jun 13 11:22:32 2012
Database Characterset is AL32UTF8
Wed Jun 13 11:22:32 2012
Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_11751.trc:
ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
Wed Jun 13 11:22:33 2012
Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_11751.trc:
ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
Error 600 happened during db open, shutting down database
USER: terminating instance due to error 600
Instance terminated by USER, pid = 11751
ORA-1092 signalled during: alter database open...
regards,

Hi;
Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_9174.trc:
Did you check the trc file?
ORA-00600: code d'erreur interne, arguments : [kokiasg1], [], [], [], [], [], [], []
You are getting an Oracle internal error (ORA-600), which means you may need to work with the Oracle support team. Please see the note below; if it does not help, I suggest you log an SR:
Troubleshoot an ORA-600 or ORA-7445 Error Using the Error Lookup Tool [ID 153788.1]
For future RAC issues, please use Forum Home » High Availability » RAC, ASM & Clusterware Installation, which is the dedicated RAC forum.
Regards,
Helios -
Oracle Forms 10g runtime handling during RAC node failover.
Hi,
Forms version 10g R2 (10.1.2.0.2)
Oracle DB version 10g R2 RAC with 3 nodes.
If the RAC DB node that the user is connected to goes down, the user gets FRM-40733 and ORA-03114 error messages and the client forms application gets locked down/ goes in a loop with the error messages. The user has to close the browser to get out of the loop. I understand that this is the expected behaviour, but I'm wondering whether we can trap the error ORA-03114 and fire the "key-exit" trigger to get out of the application.
Have any one implemented a clean way to exit the Forms application when the RAC DB node goes down..?
I'm looking for some suggestions or an elegant way to handle the above failure.
Thank you in advance.
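One hedged sketch of trapping this in a form-level ON-ERROR trigger (the FRM-40733/ORA-3114 mapping and the message text are assumptions, and whether EXIT_FORM still completes cleanly once the session is gone depends on the runtime state):

```sql
-- Form-level ON-ERROR trigger (Forms PL/SQL)
DECLARE
  err NUMBER := ERROR_CODE;
BEGIN
  -- FRM-40733 wraps a server-side error; DBMS_ERROR_CODE returns the ORA- number
  IF err = 40733 AND DBMS_ERROR_CODE = -3114 THEN
    MESSAGE('Database connection lost - closing the form.');
    EXIT_FORM(NO_VALIDATE);  -- leave without validation, as KEY-EXIT would
  ELSE
    -- re-raise everything else with the default-style message
    MESSAGE(ERROR_TYPE || '-' || TO_CHAR(err) || ': ' || ERROR_TEXT);
    RAISE FORM_TRIGGER_FAILURE;
  END IF;
END;
```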
Sudhakar

Glen,
I haven't solved this one yet. I have been playing around with the following:
In my environment, I am still using 6i (not web) forms/reports.
My clients are XP, NT, 2000.
I have the forms/report runtime installed on their PCs.
Their TNSNAMES.ORA will be pointing to the PRIMARY (PDB).
If a SWITCHOVER or FAILOVER happens to the physical standby (SDB), I want a trigger to kick off a batch file that will update the TNSNAMES.ORA on each client's station.
On the standby
CREATE OR REPLACE TRIGGER change_tns
AFTER DB_ROLE_CHANGE ON DATABASE
DECLARE
  role   VARCHAR2(30);
  dbname VARCHAR2(100);
BEGIN
  SELECT
    DATABASE_ROLE,
    DB_UNIQUE_NAME
  INTO
    role,
    dbname
  FROM
    V$DATABASE;
  IF role = 'PRIMARY' AND dbname = 'SDB' THEN
    -- kick off the batch file that pushes the standby tnsnames.ora to the clients
    dbms_scheduler.create_job(
      job_name   => 'move_sqlnet',
      job_type   => 'executable',
      job_action => 'c:\temp\movetns.cmd',
      enabled    => TRUE);
  ELSE
    -- if the standby >was< PRIMARY,
    -- but the primary comes BACK on line,
    -- need to reverse the step above.
    NULL;
  END IF;
END;
As for the movetns.cmd
something like
rem -- attach to the workstation,
net use m: \\station name\share name
rem -- stdb_tnsname.ora would be pointing to STANDBY
copy stdb_tnsname.ora m:\orant\net80\tnsname.ora
net use m: /delete
rem -- need to do that for all workstations..
As you can see, there could be lots of problems with this procedure.
If a client doesn't know about the failover and reboots the PC, the new tnsnames.ora will not reach that client. What to do for that client? Do I re-run the batch every hour?
Tell me if you come up with an answer.
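An alternative worth considering is Oracle Net connect-time failover, which avoids pushing files at all: a new connection simply tries the next address when the first host is unreachable (it does not help sessions that are already open). A sketch of a tnsnames.ora entry; the hostnames and service name are placeholders:

```text
PDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (LOAD_BALANCE = OFF)
      (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = PDB))
  )
```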
p- -
Can't find latest snapshot while generating AWR in RAC
Hello,
we take a snapshot every 20 minutes. In RAC (10.2.0.4), on instance 2, I run the query:
SELECT snap_id, begin_interval_time, end_interval_time
FROM dba_hist_snapshot
ORDER BY 1
3391 26-AUG-10 09.00.14.425 AM
26-AUG-10 09.20.05.485 AM
3392 26-AUG-10 09.20.05.485 AM
26-AUG-10 09.41.02.558 AM
I try to run an AWR report using the latest snapshot:
@$ORACLE_HOME/rdbms/admin/awrrpt.sql
Current Instance
~~~~~~~~~~~~~~~~
DB Id DB Name Inst Num Instance
2562639660 xxxx1 2 xxxx2
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: html
Type Specified: html
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
* 2562639660 2 xxx xxx2 xxx2
2562639660 1 xxx xxxx1 xxx1
Using 2562639660 for database Id
Using 2 for instance number
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days:
Instance DB Name Snap Id Snap Started Level
xxx2 xxx1 2836 18 Aug 2010 16:17 1
3179 23 Aug 2010 10:40 1
3180 23 Aug 2010 10:57 1
3182 23 Aug 2010 11:35 1
3187 23 Aug 2010 13:15 1
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 3182
Begin Snapshot Id specified: 3182
Enter value for end_snap: 3187 ---the latest 'valid' snapid is 3187 not 3391???
Report written to test1
---check
select snap_id, DBID, instance_number,SNAP_LEVEL, ERROR_COUNT from dba_hist_snapshot where snap_id = 3187;
SNAP_ID DBID INSTANCE_NUMBER SNAP_LEVEL ERROR_COUNT
3187 2562639660 1 1 0
3187 2562639660 2 1 0
select snap_id, DBID, instance_number,SNAP_LEVEL, ERROR_COUNT from dba_hist_snapshot where snap_id = 3391;
SNAP_ID DBID INSTANCE_NUMBER SNAP_LEVEL ERROR_COUNT
3391 2562639660 1 1 0
It seems that instance 2's snapshots are not being collected; any suggestions?
Thank you

select job_name, error#, ACTUAL_START_DATE, ADDITIONAL_INFO from dba_scheduler_job_run_details where job_name = 'SNAPSHOT_COLLECTION';
JOB_NAME
ERROR# ACTUAL_START_DATE
ADDITIONAL_INFO
SNAPSHOT_COLLECTION
13516 27-AUG-10 11.00.30.935776 AM CST6CDT
ORA-13516: AWR Operation failed: ORA-13516: AWR Operation failed: INTERVAL Setting is ZERO
ORA-06512: at "SYS.DBMS_WORKLOAD_REPOSITORY", line 10
ORA-06512: at "SYS.DBMS_WORKLOAD_REPOSITORY", line 33
ORA-06512: at line 1
OK, so I reset the interval back to 20 minutes; let me see if it works...
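For reference, the current setting can be checked in DBA_HIST_WR_CONTROL (a zero SNAP_INTERVAL matches the ORA-13516 message above) and reset with DBMS_WORKLOAD_REPOSITORY; a minimal sketch, where the retention value is an assumption:

```sql
-- Check the AWR snapshot settings; a SNAP_INTERVAL of +00000 00:00:00.0
-- means snapshot collection is disabled
SELECT dbid, snap_interval, retention FROM dba_hist_wr_control;

-- Restore a 20-minute interval (both arguments are in minutes;
-- 10080 = 7 days is an assumed retention, adjust to your policy)
BEGIN
  dbms_workload_repository.modify_snapshot_settings(interval => 20, retention => 10080);
END;
/
```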
thank you -
Oracle RAC on Suse 9.1 ?
Hi Linux Forum,
For the purpose of a conference demonstration in the beginning of November - this year ;-) - I am planning to set up an Oracle RAC with 10g.
It would be really nice to do this setup on SuSe 9.1 with the 2.6 kernel.
My question is now as simple as:
Should I just forget it, since Oracle RAC does not work with a 2.6 kernel, or could I expect to reach the goal before november?
--Flemming

I would suggest using SuSE Enterprise Edition (a 30-day eval is available) if you're committed to SuSE - I have gotten it to work here.
Otherwise, the easiest path is to use WhiteBox or CentOS Linux - both are free rebuilds of Red Hat Enterprise Linux.
centos.org, I think - just Google for CentOS Linux or WhiteBox Linux. That way you can simply follow the setup guides Oracle & Red Hat provide -
During the installation of grid infra(cluster) for Oracle 11.2 RAC one.
Good Day All, and thanks in advance…
During the installation of grid infrastructure (cluster) for Oracle 11.2 RAC One Node on AIX 6.1 (PROD), with ASM, I am getting the errors below when executing ./root.sh.
Upon investigation, I managed to find note 1068212.1 on the Oracle support site (see below for details). I might be hitting unpublished bug 8670579. I also logged a Severity 2 SR with Oracle support, and no one has attended to the call yet.
This might be a configuration issue or otherwise; if you have experienced the same issue, please assist. (If you need more log files, please feel free to ask.)
I ran the Cluster Verify Check – all passed.
Many Thanks
Ezekiel Filane
/u01/app/11.2.0/grid#./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-10-19 10:33:11: Parsing the host name
2010-10-19 10:33:11: Checking for super user privileges
2010-10-19 10:33:11: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User grid has the required capabilities to run CSSD in realtime mode
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'csgipm'
CRS-2672: Attempting to start 'ora.mdnsd' on 'csgipm'
CRS-2676: Start of 'ora.gipcd' on 'csgipm' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'csgipm' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'csgipm'
CRS-2676: Start of 'ora.gpnpd' on 'csgipm' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'csgipm'
CRS-2676: Start of 'ora.cssdmonitor' on 'csgipm' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'csgipm'
CRS-2672: Attempting to start 'ora.diskmon' on 'csgipm'
CRS-2676: Start of 'ora.diskmon' on 'csgipm' succeeded
CRS-2676: Start of 'ora.cssd' on 'csgipm' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'csgipm'
Start action for daemon aborted
CRS-2674: Start of 'ora.ctssd' on 'csgipm' failed
CRS-2679: Attempting to clean 'ora.ctssd' on 'csgipm'
CRS-2681: Clean of 'ora.ctssd' on 'csgipm' succeeded
CRS-4000: Command Start failed, or completed with errors.
Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl start resource ora.ctssd -init
Start of resource "ora.ctssd -init" failed
Clusterware exclusive mode start of resource ora.ctssd failed
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl stop resource ora.crsd -init
Stop of resource "ora.crsd -init" failed
Failed to stop CRSD
CRS-2500: Cannot stop resource 'ora.asm' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl stop resource ora.asm -init
Stop of resource "ora.asm -init" failed
Failed to stop ASM
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'csgipm'
CRS-2677: Stop of 'ora.cssdmonitor' on 'csgipm' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'csgipm'
CRS-2677: Stop of 'ora.cssd' on 'csgipm' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'csgipm'
CRS-2677: Stop of 'ora.gpnpd' on 'csgipm' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'csgipm'
CRS-2677: Stop of 'ora.gipcd' on 'csgipm' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'csgipm'
CRS-2677: Stop of 'ora.mdnsd' on 'csgipm' succeeded
Initial cluster configuration failed. See /u01/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_csgipm.log for details
csgipm:/u01/app/11.2.0/grid#ps -ef | grep pmon
root 6160492 3932160 0 10:54:13 pts/2 0:00 grep pmon
more /u01/app/11.2.0/grid/log/csgipm/client/ocrconfig_5767204.log
csgipm:/usr/sbin#more /u01/app/11.2.0/grid/log/csgipm/client/ocrconfig_5767204.log
2010-10-19 10:33:14.435: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 4
2010-10-19 10:33:14.435: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 5
2010-10-19 10:33:14.435: [ OCRRAW][1]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
2010-10-19 10:33:14.435: [ OCRRAW][1]proprioini: all disks are not OCR/OLR formatted
2010-10-19 10:33:14.435: [ OCRRAW][1]proprinit: Could not open raw device
2010-10-19 10:33:14.442: [ default][1]a_init:7!: Backend init unsuccessful : [26]
2010-10-19 10:33:14.461: [ OCRCONF][1]Exporting OCR data to [OCRUPGRADEFILE]
2010-10-19 10:33:14.461: [ OCRAPI][1]a_init:7!: Backend init unsuccessful : [33]
2010-10-19 10:33:14.461: [ OCRCONF][1]There was no previous version of OCR. error:[PROCL-33: Oracle Local Registry is not configured]
2010-10-19 10:33:14.461: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 0
2010-10-19 10:33:14.461: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 1
2010-10-19 10:33:14.462: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 2
2010-10-19 10:33:14.462: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 3
2010-10-19 10:33:14.462: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 4
2010-10-19 10:33:14.462: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 5
2010-10-19 10:33:14.462: [ OCRRAW][1]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
2010-10-19 10:33:14.462: [ OCRRAW][1]proprioini: all disks are not OCR/OLR formatted
2010-10-19 10:33:14.462: [ OCRRAW][1]proprinit: Could not open raw device
2010-10-19 10:33:14.462: [ default][1]a_init:7!: Backend init unsuccessful : [26]
2010-10-19 10:33:14.462: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 0
2010-10-19 10:33:14.463: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 1
2010-10-19 10:33:14.463: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 2
2010-10-19 10:33:14.463: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 3
2010-10-19 10:33:14.463: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 4
2010-10-19 10:33:14.463: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 5
2010-10-19 10:33:14.463: [ OCRRAW][1]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
2010-10-19 10:33:14.463: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 0
2010-10-19 10:33:14.463: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 1
2010-10-19 10:33:14.463: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 2
2010-10-19 10:33:14.463: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 3
2010-10-19 10:33:14.463: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 4
2010-10-19 10:33:14.463: [ OCROSD][1]utread:3: Problem reading buffer 104ef000 buflen 4096 retval 0 phy_offset 102400 retry 5
2010-10-19 10:33:14.483: [ OCRRAW][1]ibctx: Failed to read the whole bootblock. Assumes invalid format.
2010-10-19 10:33:14.483: [ OCRRAW][1]proprinit:problem reading the bootblock or superbloc 22
2010-10-19 10:33:14.483: [ OCROSD][1]utread:3: Problem reading buffer 104fe000 buflen 4096 retval 0 phy_offset 102400 retry 0
2010-10-19 10:33:14.483: [ OCROSD][1]utread:3: Problem reading buffer 104fe000 buflen 4096 retval 0 phy_offset 102400 retry 1
2010-10-19 10:33:14.483: [ OCROSD][1]utread:3: Problem reading buffer 104fe000 buflen 4096 retval 0 phy_offset 102400 retry 2
2010-10-19 10:33:14.484: [ OCROSD][1]utread:3: Problem reading buffer 104fe000 buflen 4096 retval 0 phy_offset 102400 retry 3
2010-10-19 10:33:14.484: [ OCROSD][1]utread:3: Problem reading buffer 104fe000 buflen 4096 retval 0 phy_offset 102400 retry 4
2010-10-19 10:33:14.484: [ OCROSD][1]utread:3: Problem reading buffer 104fe000 buflen 4096 retval 0 phy_offset 102400 retry 5
2010-10-19 10:33:14.484: [ OCRRAW][1]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
2010-10-19 10:33:14.541: [ OCRAPI][1]a_init:6a: Backend init successful
2010-10-19 10:33:14.646: [ OCRCONF][1]Initialized DATABASE keys
2010-10-19 10:33:14.650: [ OCRCONF][1]Exiting [status=success]...
Hi,
We are also trying to install 11.2.0.2 Grid infrastructure for Oracle RAC One Node on AIX 6.1. We did a POC in our lab environment and after much struggle got that working. Now we are building 4 clusters in the production environment and the first cluster installation failed while running root.sh on node2. We already have a Sev1 ticket open with Oracle Support but have not heard anything.
Here is root.sh output from node2. The two node names are p01dou416 and p01dou417.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node p01dou416, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Failed to start Oracle Clusterware stack
Failed to start Cluster Synchorinisation Service in clustered mode at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1020.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
[root@P01DOU417] /u01/app/11.2.0/grid #
LOG output: /u01/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_p01dou417.log
2010-11-13 17:22:14: Successfully started requested Oracle stack daemons
2010-11-13 17:22:14: Starting CSS in clustered mode
2010-11-13 17:22:14: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl start resource ora.cssd -init
2010-11-13 17:32:28: Command output:
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'p01dou417'
CRS-2672: Attempting to start 'ora.gipcd' on 'p01dou417'
CRS-2676: Start of 'ora.cssdmonitor' on 'p01dou417' succeeded
CRS-2676: Start of 'ora.gipcd' on 'p01dou417' succeeded
CRS-2679: Attempting to clean 'ora.cssd' on 'p01dou417'
CRS-2681: Clean of 'ora.cssd' on 'p01dou417' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'p01dou417'
CRS-2677: Stop of 'ora.diskmon' on 'p01dou417' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'p01dou417'
CRS-2677: Stop of 'ora.gipcd' on 'p01dou417' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'p01dou417'
CRS-2677: Stop of 'ora.cssdmonitor' on 'p01dou417' succeeded
CRS-5804: Communication error with agent process
CRS-4000: Command Start failed, or completed with errors.
End Command output
2010-11-13 17:32:28: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl check css
2010-11-13 17:32:28: Command output:
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
End Command output
2010-11-13 17:32:28: Checking the status of css
2010-11-13 17:32:33: Executing cmd: /u01/app/11.2.0/grid/bin/crsctl check css
2010-11-13 17:32:33: Command output:
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
End Command output
2010-11-13 17:32:33: Checking the status of css
2010-11-13 17:32:38: CRS-2672: Attempting to start 'ora.cssdmonitor' on 'p01dou417'
2010-11-13 17:32:38: CRS-2672: Attempting to start 'ora.gipcd' on 'p01dou417'
2010-11-13 17:32:38: CRS-2676: Start of 'ora.cssdmonitor' on 'p01dou417' succeeded
2010-11-13 17:32:38: CRS-2676: Start of 'ora.gipcd' on 'p01dou417' succeeded
2010-11-13 17:32:38: CRS-2672: Attempting to start 'ora.cssd' on 'p01dou417'
2010-11-13 17:32:38: CRS-2672: Attempting to start 'ora.diskmon' on 'p01dou417'
2010-11-13 17:32:38: CRS-2676: Start of 'ora.diskmon' on 'p01dou417' succeeded
2010-11-13 17:32:38: CRS-2674: Start of 'ora.cssd' on 'p01dou417' failed
2010-11-13 17:32:38: CRS-2679: Attempting to clean 'ora.cssd' on 'p01dou417'
2010-11-13 17:32:38: CRS-2681: Clean of 'ora.cssd' on 'p01dou417' succeeded
2010-11-13 17:32:38: CRS-2673: Attempting to stop 'ora.diskmon' on 'p01dou417'
2010-11-13 17:32:38: CRS-2677: Stop of 'ora.diskmon' on 'p01dou417' succeeded
2010-11-13 17:32:38: CRS-2673: Attempting to stop 'ora.gipcd' on 'p01dou417'
2010-11-13 17:32:38: CRS-2677: Stop of 'ora.gipcd' on 'p01dou417' succeeded
2010-11-13 17:32:38: CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'p01dou417'
2010-11-13 17:32:38: CRS-2677: Stop of 'ora.cssdmonitor' on 'p01dou417' succeeded
2010-11-13 17:32:38: CRS-5804: Communication error with agent process
2010-11-13 17:32:38: CRS-4000: Command Start failed, or completed with errors.
2010-11-13 17:32:38: Failed to start Oracle Clusterware stack
2010-11-13 17:32:38: ###### Begin DIE Stack Trace ######
2010-11-13 17:32:38: Package File Line Calling
2010-11-13 17:32:38: --------------- -------------------- ---- ----------
2010-11-13 17:32:38: 1: main rootcrs.pl 324 crsconfig_lib::dietrap
2010-11-13 17:32:38: 2: crsconfig_lib crsconfig_lib.pm 1020 main::__ANON__
2010-11-13 17:32:38: 3: crsconfig_lib crsconfig_lib.pm 997 crsconfig_lib::start_cluster
2010-11-13 17:32:38: 4: main rootcrs.pl 697 crsconfig_lib::perform_start_cluster
2010-11-13 17:32:38: ####### End DIE Stack Trace #######
2010-11-13 17:32:38: 'ROOTCRS_STACK' checkpoint has failed
Any help on this is appreciated.
Edited by: user12019257 on Nov 17, 2010 1:26 PM -
Configure OEM in Oracle 10.2.0.4 Two nodes RAC
Hi all,
I'm having some trouble getting the right configuration for OEM in my RAC.
Everything during installation or configuration seems OK, but afterwards it's all wrong.
The point is:
The global database is up and each of the two nodes is also up. I can access it from inside and outside the server nodes.
CRS shows everything OK with all services and resources ONLINE.
emctl status dbconsole -cluster
https://node2:5500/em/console/aboutApplication
EM Daemon is running.
emca -displayConfig dbcontrol -cluster
INFO:
**************** Current Configuration ****************
INSTANCE NODE DBCONTROL_UPLOAD_HOST
inode2 node2 node2
inode1 node1 node2
So everything should be OK
When I try to access OEM, it says that the status can't be accessed and offers me the choice of START or BEGIN RECOVER
When I go through START, the two nodes show as up, but I can't access the instances.
So I know that something is configured wrong, but I don't know WHAT!!!
Any Idea???
Thanks in advance
Start looking at the logs under $OH/<node_name>/sysman/log - most likely they will offer some clues.
-
Best way to have resizeable LUNS for datafiles - non RAC system
All,
(thanks Avi for the help so far, I know it's a holiday there so I'll wait for your return and see if any other users can chip in also)
one of our systems (many on the go here) is being provided by an external vendor - I am reviewing their design and I have some concerns about the LUN's to house the datafiles:
they don't want to pre-assign full-size LUNs - sized for future growth - and want more flexibility to give each env less disk space in the beginning and allocate more as each env grows
they are not going to use RAC (the system has nowhere near the uptime/capacity reqs - and we are removing it as it has caused enormous issues with the previous vendors and their lack of skills with it - we want simplicity)
They have said they do not want to use ASM (I have asked for that previously; I think they have never used it before - I may be able to change their minds on this, but they are saying as it's not RAC it's not needed)
but they are wondering how they give smaller LUNs to the env and increase the size as they grow - but don't want to forever be adding /u0X /u0Y /u0Z extra filesystems (ebusiness suite rapidclone doesn't like working with many filesystems anyway and I find it inelegant to have so many mount points)
they have suggested using large OVM repos and serving the data filesystems out of them (I have told them to use the repos just for the guest OSs and use directly physically attached LUNs for datafiles (5TB of them))
now they have suggested creating a large LUN (large enough for many envs at the same time [dev / test1 / test2 etc]) .... and putting OCFS2 on it so that they can mount it to all the domU/guests and they can allocate space as needed out of that:
so that they have guests/VMs (DEV1 - DEV2 - TEST1 say) (all separate VMs) all mounting the same OCFS2 cluster filesystem (as /u01 maybe) and they can share that for the datafiles under a separate dir, so that each DB VM would see:
/u01/ and as subdirectories to that DEV1 DEV2 TEST1 so:
/u01/DEV1
/u01/DEV2
/u01/TEST1
and only use the right directory for each guests datafiles (thus sharing the space in u01(the big LUN) as needed per env)....
I really don't like that, as each guest is going to have the same Oracle UNIX user details and be able to write to each other's dirs - I'd prefer dedicated LUNs for each VM, not mounted to many VMs
so I am looking for a way to suggest something better....
should I just insist on ASM (but this is a risk as I fear they are not experienced with it)
or go with OEL/RHEL LVM and standard ext filesystems that can be extended - what are the risks with this? (On A Linux Guest For OVM, Which Partitions Can Be LVM? [ID 1080783.1]) - seems to say there is little performance impact
or is there another option?
Thanks all
Martin
Edited by: Martin Brambley on 11-Jun-2012 08:53
Martin, what route did you end up going?
We are about to deploy several hundred OEL VMs that are going to run non-RAC database instances. We don't plan to use ASM either. Our plan right now is to use one large 3TB LUN virtual disk to carve out the operating system space for the VMs and then have a separate physical attached LUN for each VM that will host a /u01 filesystem using LVM. I have concerns with this as we don't know how much space /u01 will ultimately need and if we end up having to extend /u01 on all of these VMs, that sounds like it will be messy. Right now I've got 400 separate 25gb LUNs presented to all of my OVM servers that we plan to use for /u01 filesystems. -
Advice regarding how best to collect stats on 10G RAC Production system
Friends,
I have read quite a lot of blogs and docs and need some help with the best way forward. I am a DBA new to RAC who has limited experience with busy 24x7 10g systems on the scale of my current employer.
Historically stats are gathered here as follows :-
exec dbms_stats.unlock_schema_stats('BP');
exec dbms_stats.gather_schema_stats(ownname => 'BP', cascade => true, estimate_percent => dbms_stats.auto_sample_size);
exec dbms_stats.lock_schema_stats('BP');
Then Flush shared pool ok ????
Because of previous issues with this - all tables are currently locked and this process is recommended every 1-2 months rather than daily.
EM Grid Control is used when performance is poor and the sql tuning advisor is run to generate recommendations from which a sql profile could be selected and enabled for the selected code.
My plan is to bring back gathering of stats every 1 to 2 months; my goal is to make sure I can fix things quickly if it all goes to custard !!!!
From research it looks like sql_profile is like a hint and independent of gathering stats - it tells optimiser what hints to use when executing sql.
This thread is for advice from professional dba's in my shoes - how do you approach this so that any issues are quickly rectified ???
My thinking is to query dba_sql_profiles (dba_profiles is for resource/password profiles, not SQL profiles) and get a list of profiles and statuses ... for all tables with sql profiles ..
This is so profiles can be disabled and then quickly enabled if there is a problem after the tables are analyzed.
To revert all the schema stats :-
exec dbms_stats.unlock_schema_stats('BP');
exec dbms_stats.restore_schema_stats(ownname=>'BP',as_of_timestamp=>sysdate-1);
exec dbms_stats.lock_schema_stats('BP');
To revert a table's stats (this looks more finicky so not sure the way to go ...):-
Pre gather stats :-
select stats_update_time from user_tab_stats_history where table_name = '<EnterTabName>';
exec dbms_stats.create_stat_table('SCOTT', 'stattab_new');
exec dbms_stats.export_table_stats('SCOTT', 'DEPT', null, 'stattab_new', null, true, 'SCOTT');
Then later after gather stats :-
exec dbms_stats.restore_table_stats('SCOTT', 'DEPT', '21-JAN-09 11.00.00.000000 AM -05:00');
Enable/Disable Profile
exec dbms_sqltune.alter_sql_profile('<Profile name>', 'STATUS', 'DISABLED');
exec dbms_sqltune.alter_sql_profile('<Profile name>', 'STATUS', 'ENABLED');
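For the "list profiles and statuses" step, a sketch of the kind of query that could be run before and after the stats refresh - the profile name below is a placeholder, and note that dba_sql_profiles is the dictionary view for SQL profiles (distinct from dba_profiles):

```sql
-- List existing SQL profiles and their status before touching stats
column name format a30
column status format a10
select name, status, created
from   dba_sql_profiles
order  by created;

-- Disable, then later re-enable, a profile by name (name is a placeholder)
exec dbms_sqltune.alter_sql_profile(name => 'SYS_SQLPROF_0123', attribute_name => 'STATUS', value => 'DISABLED');
exec dbms_sqltune.alter_sql_profile(name => 'SYS_SQLPROF_0123', attribute_name => 'STATUS', value => 'ENABLED');
```

Capturing the "before" output of this query gives a record to compare against after the gather, so any profile that needs toggling back can be identified quickly.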
I will do the plan below on a test system first; however, load may not really reveal problems until it runs for real on the Prod system.
My plan is to :-
1 analyze all tables as per outline at start above (existing practice)
2 Disable the sql profiles that are in use on the analyzed tables
3 See what code is affected and what tables
If Profile exists for these sql statements then either apply existing profile (as disabled) or use tuning adviser to create another profile
(Advice welcome here - what do you do on big systems ????)
4 If it's a catastrophe - I can restore the schema stats using (exec dbms_stats.restore_schema_stats(ownname=>'BP',as_of_timestamp=>sysdate-1);)
and then possibly re-enabling the sql_profiles that were in place before ....
I welcome any advice based on similar experiences that can help me get this right.
Many thanks,
cheers, Rob
Edited by: Diesel on Jun 27, 2012 10:51 PM
Useful Link:
http://www.oradev.com/create_statistics.jsp
## Gather schema stats
begin
dbms_stats.gather_schema_stats(ownname=>'SYSLOG');
end;
## Gather a particular table stats of a schema
begin
DBMS_STATS.gather_table_stats(ownname=>'syslog',tabname=>'logs');
end;
Regards
Asif Kabir
--mark the answer as correct/helpful -
How to prevent race conditions in a web application?
Consider an e-commerce site, where Alice and Bob are both editing the product listings. Alice is improving descriptions, while Bob is updating prices. They start editing the Acme Wonder Widget at the same time. Bob finishes first and saves the product with
the new price. Alice takes a bit longer to update the description, and when she finishes, she saves the product with her new description. Unfortunately, she also overwrites the price with the old price, which was not intended.
In my experience, these issues are extremely common in web apps. Some software (e.g. wiki software) does have protection against this - usually the second save fails with "the page was updated while you were editing". But most web sites do not
have this protection.
It's worth noting that the controller methods are thread-safe in themselves. Usually they use database transactions, which make them safe in the sense that if Alice and Bob try to save at the precise same moment, it won't cause corruption. The race condition
arises from Alice or Bob having stale data in their browser.
How can we prevent such race conditions? In particular, I'd like to know:
What techniques can be used? e.g. tracking the time of last change. What are the pros and cons of each.
What is a helpful user experience?
What frameworks have this protection built in?
Hi,
>> Consider an e-commerce site, where Alice and Bob are both editing the product listings. Alice is improving descriptions, while Bob is updating
prices. They start editing the Acme Wonder Widget at the same time. Bob finishes first and saves the product with the new price. Alice takes a bit longer to update the description, and when she finishes, she saves the product with her new description. Unfortunately,
she also overwrites the price with the old price, which was not intended.
This is a classic question that you can find in any development exam :-)
There are several options depending on the behavior that fits your needs, and several points that need to be taken into consideration.
1. Using locking on the application side, you can make sure that two people do not open the same product for editing. This is in most cases the best option.
* I am not talking about thread-safety here, but the implementation is almost the same. The locking can be done using a singleton class with a static boolean element. Every time a user wants to edit, we check this value as the first action. If the value is false, then we take the lock and change it to true -> do whatever we need -> change the value back to false -> unlock.
Behavior: The first person that tries to edit locks the product, and the second gets a message that this product is under editing. In this case you do not open a connection to the database, and your application prevents any problem.
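A minimal sketch of that application-side lock, assuming an in-process service (the EditLockRegistry name and the per-product set are my own illustration, not from any framework; a set of product ids replaces the single boolean so different products can be edited concurrently):

```python
import threading

class EditLockRegistry:
    """Process-wide registry of per-product edit locks (option 1 above)."""
    _instance = None
    _instance_lock = threading.Lock()

    def __init__(self):
        self._mutex = threading.Lock()
        self._locked_products = set()  # product ids currently being edited

    @classmethod
    def instance(cls):
        # Singleton accessor, as the post suggests
        with cls._instance_lock:
            if cls._instance is None:
                cls._instance = cls()
            return cls._instance

    def try_acquire(self, product_id):
        """Return True if the caller may edit, False if someone else already is."""
        with self._mutex:
            if product_id in self._locked_products:
                return False
            self._locked_products.add(product_id)
            return True

    def release(self, product_id):
        with self._mutex:
            self._locked_products.discard(product_id)

registry = EditLockRegistry.instance()
print(registry.try_acquire("widget-1"))  # prints True: first editor gets the lock
print(registry.try_acquire("widget-1"))  # prints False: second editor is refused
registry.release("widget-1")
print(registry.try_acquire("widget-1"))  # prints True: lock is free again
```

One caveat the post glosses over: a real deployment would also need a timeout or heartbeat so a closed browser tab does not hold the lock forever, and a multi-server site would need the flag in a shared store rather than in-process memory.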
2. Using "read your writes", as mentioned
Behavior: this means that several people can open the same product for editing, and only when they try to send it to the server do they get a message telling them that they have wasted their time and someone else has already changed the product information. At this point they have two options: (1) overwrite what the other person did, (2) start from the beginning.
This is the way most WIKI websites work.
3. Using real-time web functionality like SignalR, WebSockets, or any streaming, for example. In this case you can send the person working on the edit a message like "this product has already been edited" rather than letting him keep working. You may still need one of the above options as well, but since the user gets the information in real time, he has time to choose what to do.
4. Using "Change before Write" or "read before edit": The idea is to have a column that indicates whether the row is in use. The type of this column should be the same as the user's unique column type. Before the user starts, you check the value of this column. If it is 0, then you change it to the user's unique value (for example user_id); if the value was not 0, then you know that someone else is editing the product. In this case the locking is managed in the database. Make sure that you work with transactions while reading the value and changing it! You can also change the default from a shared lock to an X lock during this action, if you really want.
There are several other options, if those do not fit your needs
Ronen Ariely
[Personal Site] [Blog] [Facebook]