Redo log usage
Dear all,
Could you please tell me the usage of the redo log files?
Is it only for the recovery process?
Should we back them up?
Regards,
Upul Indika
Could you please tell me the usage of the redo log files?
Typically, redo log files are used for media recovery; they are also used for instance recovery and for the standby database process.
Is it only for the recovery process?
Should we back them up? The online redo logs are never backed up; rather, you multiplex and archive them. Even Oracle's own backup and recovery tool, RMAN, never backs up the online redo log files. The redo logs are used in a cyclic manner: when one fills up, writing moves to the next one (which is called a log switch). In NOARCHIVELOG mode a redo log is simply overwritten, so its contents are of no further use for recovering the database files. Whatever you perform in your database first goes to the redo log and then to the data files. You may wonder why the redo log contains both uncommitted and committed values: in case of any accident, you can recover to the point just before that accident.
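The cyclic usage, archiving status, and multiplexed members described above can be inspected from the dynamic views; a minimal sketch (run as a privileged user):

```sql
-- Which group is CURRENT, which are INACTIVE, and whether each has been archived
SELECT group#, sequence#, bytes, archived, status
FROM   v$log;

-- The individual multiplexed members of each group
SELECT group#, member
FROM   v$logfile
ORDER  BY group#;
```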
Khurram
Similar Messages
-
Usage of Redo Log Groups & Disk Contention
Hi,
I have a peculiar problem here.
I have the redo log groups/members configured in the following manner. Please note that the disks are in an A-B-A-B-A-B sequence for successive redo groups.
GROUP# MEMBER
11 /origlogA/log_g11m1.dbf
11 /mirrlogA/log_g11m2.dbf
12 /origlogB/log_g12m1.dbf
12 /mirrlogB/log_g12m2.dbf
13 /origlogA/log_g13m1.dbf
13 /mirrlogA/log_g13m2.dbf
14 /origlogB/log_g14m1.dbf
14 /mirrlogB/log_g14m2.dbf
15 /origlogA/log_g15m1.dbf
15 /mirrlogA/log_g15m2.dbf
16 /origlogB/log_g16m1.dbf
16 /mirrlogB/log_g16m2.dbf
17 /origlogA/log_g17m1.dbf
17 /mirrlogA/log_g17m2.dbf
18 /origlogB/log_g18m1.dbf
18 /mirrlogB/log_g18m2.dbf
19 /origlogA/log_g19m1.dbf
19 /mirrlogA/log_g19m2.dbf
20 /origlogB/log_g20m1.dbf
20 /mirrlogB/log_g20m2.dbf
21 /origlogA/log_g21m1.dbf
21 /mirrlogA/log_g21m2.dbf
22 /origlogB/log_g22m1.dbf
22 /mirrlogB/log_g22m2.dbf
23 /origlogA/log_g23m1.dbf
23 /mirrlogA/log_g23m2.dbf
24 /origlogB/log_g24m1.dbf
24 /mirrlogB/log_g24m2.dbf
But Oracle uses these groups in a zig-zag manner (please refer to the list below). Here, after group# 15, group# 11 is used next, and the members of these two groups are on the same set of disks, i.e. /origlogA and /mirrlogA.
(Note:The following result is ordered by sequence #)
GROUP# SEQUENCE# MEMBER
16 263076 /origlogB/log_g16m1.dbf
16 263076 /mirrlogB/log_g16m2.dbf
17 263077 /origlogA/log_g17m1.dbf
17 263077 /mirrlogA/log_g17m2.dbf
18 263078 /origlogB/log_g18m1.dbf
18 263078 /mirrlogB/log_g18m2.dbf
19 263079 /origlogA/log_g19m1.dbf
19 263079 /mirrlogA/log_g19m2.dbf
20 263080 /origlogB/log_g20m1.dbf
20 263080 /mirrlogB/log_g20m2.dbf
21 263081 /origlogA/log_g21m1.dbf
21 263081 /mirrlogA/log_g21m2.dbf
22 263082 /origlogB/log_g22m1.dbf
22 263082 /mirrlogB/log_g22m2.dbf
23 263083 /origlogA/log_g23m1.dbf
23 263083 /mirrlogA/log_g23m2.dbf
24 263084 /origlogB/log_g24m1.dbf
24 263084 /mirrlogB/log_g24m2.dbf
13 263085 /origlogA/log_g13m1.dbf
13 263085 /mirrlogA/log_g13m2.dbf
14 263086 /origlogB/log_g14m1.dbf
14 263086 /mirrlogB/log_g14m2.dbf
15 263087 /origlogA/log_g15m1.dbf
15 263087 /mirrlogA/log_g15m2.dbf
11 263088 /origlogA/log_g11m1.dbf
11 263088 /mirrlogA/log_g11m2.dbf
12 263089 /origlogB/log_g12m1.dbf
12 263089 /mirrlogB/log_g12m2.dbf
Is there any way we can force Oracle to use the log groups in the right succession (11-12-13-14-15-16-17-18-19-20, etc.)?
I want to make sure that there is no chance of contention due to archiving of an offline redo log and LGWR writing to an online redo log happening on the same disk.
Thanks in advance,
Don
Hi,
There's no way to achieve what you're trying to do except:
1/ Switch logfile until the current group is the last one.
2/ Drop groups 1 to (last - 2).
3/ Create groups 1, 2, 3 (or 11, 12, 13, 14, ...; the numbers don't matter).
4/ Switch logfile twice.
5/ Alter system checkpoint.
6/ Drop the 2 or 3 remaining former groups (19, 20, 21, ...).
7/ Recreate them.
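As a sketch, the steps above translate into SQL roughly as follows (the group number, paths and size are illustrative, taken loosely from the listing in the question):

```sql
-- 1/ Switch until the current group is the last one
ALTER SYSTEM SWITCH LOGFILE;

-- 2/ Drop a group (only possible once V$LOG shows it as INACTIVE)
ALTER DATABASE DROP LOGFILE GROUP 11;

-- 3/ Recreate it in the desired order
ALTER DATABASE ADD LOGFILE GROUP 11
  ('/origlogA/log_g11m1.dbf', '/mirrlogA/log_g11m2.dbf') SIZE 50M REUSE;

-- 4/ and 5/
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;

-- 6/ and 7/ drop and recreate the remaining groups the same way
```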
But I'd like to point out that having them go in order is perfectly useless.
Also, you are a priori doing something dangerous by having log members of different groups on the same disks. Usually I'd choose to:
. Put member 1 on disk 1
. Put member 2 on disk 2
. Increase the number of archiver processes
. Ensure disks 1 and 2 are not RAID disks
Regards,
Yoann. -
How to increase the size of Redo log files?
Hi All,
I have 10g R2 RAC on RHEL. As of now, I have 3 redo log files of 50MB each. I used the redo log size advisor (after setting fast_start_mttr_target=1800) to check the optimal size of the redo logs, and it shows 400MB. Now I want to increase the size of the redo log files. How do I do it?
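For reference, the advisor's recommendation the poster mentions can be read directly from V$INSTANCE_RECOVERY once FAST_START_MTTR_TARGET is set; a sketch:

```sql
-- OPTIMAL_LOGFILE_SIZE is reported in megabytes
SELECT optimal_logfile_size
FROM   v$instance_recovery;
```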
If we are supposed to do it on production, how should we proceed?
I found the following in an article:
"The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance; however, this must be balanced against the expected recovery time. Undersized log files increase checkpoint activity and increase CPU usage."
I did not understand the point "however, this must be balanced against the expected recovery time" in the paragraph above.
Can anybody help me?
Thanks,
Praveen.
You don't have to shut down the database before dropping a redo log group, but make sure you have at least two other redo log groups. Also note that you cannot drop an active redo log group.
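Since online redo logs cannot be resized in place, the usual approach is to add new, larger groups and then drop the old ones once they go inactive. A sketch only; the group numbers, thread and file paths below are illustrative, not taken from the thread:

```sql
-- Add new 400MB groups alongside the existing 50MB ones
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4
  ('/u01/oradata/redo04a.log', '/u02/oradata/redo04b.log') SIZE 400M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5
  ('/u01/oradata/redo05a.log', '/u02/oradata/redo05b.log') SIZE 400M;

-- Move the instance onto the new groups, then drop the old ones
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE DROP LOGFILE GROUP 1;  -- repeat for the others once INACTIVE
```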
Here is a nice link:
http://www.idevelopment.info/data/Oracle/DBA_tips/Database_Administration/DBA_34.shtml
And make sure you test this in a test database first. Production should be touched only after you are really comfortable with this procedure. -
Dear All,
We are using ECC5, and the database is Oracle 9i on Windows 2003. I have noticed that the
redo log wait (s) has suddenly increased to 690.
Please suggest what the problem is and how to solve it.
Data buffer
Size kb 1,261,568
Quality % 96.2
Reads 4,234,462,711
Physical reads 160,350,516
writes 3,160,751
Buffer busy waits 1,117,697
Buffer wait time s 3,507
Shared Pool
Size kb 507,904
DD-Cache quality % 84.3
SQL Area getratio % 95.6
pinratio % 98.8
reloads/pins % 0.0297
Log buffer
Size kb 1,176
Entries 11,757,027
Allocation retries 722
Alloc fault rate % 0.0
*Redo log wait s 690*
Log files (in use) 8( 8)
Calls
User calls 41,615,763
commits 367,243
rollbacks 7,890
Recursive calls 100,067,593
Parses 7,822,590
User/Recursive calls 0.4
Reads / User calls 101.8
Time statistics
Busy wait time s 697,392
CPU time s 42,505
Time/User call ms 18
Sessions busy % 9.26
CPU usage % 4.51
CPU count 2
Redo logging
Writes 1,035,582
OS-Blocks written 14,276,056
Latching time s 1
Write time s 806
Mb written 6,574
Table scans & fetches
Short table scans 607,891
Long table scans 32,468
Fetch by rowid 1,620,054,083
by continued row 761,131
Sorts
Memory 3,046,669
Disk 32
Rows sorted 446,593,854
Regards,
Shiva
Hi Stefan,
As per the doc you suggested, the details are as follows.
There are only 24 log switches in a day; the doc says no more than 10 to 15 per hour, so this is very low.
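The log-switch rate can also be measured per hour directly from V$LOG_HISTORY rather than counted by hand; a sketch:

```sql
-- Log switches per hour over the last day
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;
```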
The DD-Cache quality of 84.1% is low.
The elapsed time since start
Elapsed since start (s) 540,731
Log buffer
Size kb 1,176
Entries 13,449,901
Allocation retries 767
Alloc fault rate % 0.0
*Redo log wait s 696*
Log files (in use) 8( 8)
Check DB Wait times
TCode ST04->Detail Analysis Menu->Wait Events
Statistics on total waits for an event
Elapsed time: 985 s
since reset at 09:34:06
Type Client Sessions Busy wait Total wait Busy wait
time (ms) time (ms) time (%)
USER User 40 1,028,710 17,594,230 5.85
BACK ARC0 1 2,640 1,264,410 0.21
BACK ARC1 1 540 1,020,400 0.05
BACK CKPT 1 950 987,490 0.10
BACK DBW0 1 130 983,920 0.01
BACK LGWR 1 160 986,430 0.02
BACK PMON 1 0 987,000 0.00
BACK RECO 1 10 1,800,010 0.00
BACK SMON 1 3,820 1,179,410 0.32
Disk based sorts
Sorts
Memory 3,443,693
Disk 41
Rows sorted 921,591,847
Check DB Shared Pool Quality
Shared Pool
Size kb 507,904
DD-Cache quality % 84.1
SQL Area getratio % 95.6
pinratio % 98.8
reloads/pins % 0.0278
V$LOGHIST
THREAD# SEQUENCE# FIRST_CHANGE# FIRST_TIME SWITCH_CHANGE#
1 31612 381284375 2008/11/13 00:01:29 381293843
1 31613 381293843 2008/11/13 00:12:12 381305142
1 31614 381305142 2008/11/13 03:32:39 381338724
1 31615 381338724 2008/11/13 06:29:21 381362057
1 31616 381362057 2008/11/13 07:00:39 381371178
1 31617 381371178 2008/11/13 07:13:01 381457916
1 31618 381457916 2008/11/13 09:26:17 381469012
1 31619 381469012 2008/11/13 10:27:19 381478636
1 31620 381478636 2008/11/13 10:59:54 381488508
1 31621 381488508 2008/11/13 11:38:33 381498759
1 31622 381498759 2008/11/13 12:05:14 381506545
1 31623 381506545 2008/11/13 12:33:48 381513732
1 31624 381513732 2008/11/13 13:08:10 381521338
1 31625 381521338 2008/11/13 13:50:15 381531371
1 31626 381531371 2008/11/13 14:38:36 381540689
1 31627 381540689 2008/11/13 15:02:19 381549493
1 31628 381549493 2008/11/13 15:43:39 381556307
1 31629 381556307 2008/11/13 16:07:47 381564737
1 31630 381564737 2008/11/13 16:39:45 381571786
1 31631 381571786 2008/11/13 17:07:07 381579026
1 31632 381579026 2008/11/13 17:37:26 381588121
1 31633 381588121 2008/11/13 18:28:58 381595963
1 31634 381595963 2008/11/13 20:00:41 381602469
1 31635 381602469 2008/11/13 22:23:20 381612866
1 31636 381612866 2008/11/14 00:01:28 381622652
1 31637 381622652 2008/11/14 00:09:52 381634720
1 31638 381634720 2008/11/14 03:32:00 381688156
1 31639 381688156 2008/11/14 07:00:30 381703441
14.11.2008 Log File information from control file 10:01:32
Group Thread Sequence Size Nr of Archive Status 1st SCN First Time
Nr Nr Nr (bytes) Members in log
1 1 31638 52428800 2 YES INACTIVE 381634720 2008/11/14 03:32:00
2 1 31639 52428800 2 YES INACTIVE 381688156 2008/11/14 07:00:30
3 1 31641 52428800 2 NO CURRENT 381783353 2008/11/14 09:50:09
4 1 31640 52428800 2 YES ACTIVE 381703441 2008/11/14 07:15:07
Regards, -
Hello,
Our DB has a very high redo log space wait time:
redo log space requests 867527
redo log space wait time 67752674
LOG_BUFFER is 14 MB, and we have 6 redo log groups with a redo log file size of 500MB each.
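The two counters quoted above come from V$SYSSTAT and can be re-read at any time to see whether the waits are still growing; a sketch:

```sql
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo log space requests',
                'redo log space wait time',
                'redo buffer allocation retries');
```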
Also, the amount of redo generated per hour :
START_DATE START NUM_LOGS MBYTES DBNAME
2008-07-03 10:00 2 1000 TKL
2008-07-03 11:00 4 2000 TKL
2008-07-03 12:00 3 1500 TKL
Will increasing the size of LOG_BUFFER help to reduce the redo log space waits?
Thanks in advance ,
Regards,
Aman
Looking quickly over the AWR report provided, the following information could be helpful:
1. You are currently targeting approx. 6GB of memory with this single instance, and the report says that physical memory is 8GB. According to the advisories it looks like you could decrease your memory allocation without hampering performance.
In particular the large_pool_size setting seems to be quite high although you're using shared servers.
Since you're using 10.2.0.4, it might be worth thinking about using the single SGA_TARGET parameter instead of specifying all the individual parameters. This allows Oracle to size the shared pool components within the given target dynamically.
2. You are currently using a couple of underscore parameters. In particular, the "_optimizer_max_permutations" parameter is set to 200, which might significantly reduce the number of execution plan permutations Oracle investigates while optimizing a statement and could lead to suboptimal plans. It could be worth checking why this has been set.
In addition you are using a non-default setting of "_shared_pool_reserved_pct" which might no longer be necessary if you are using the SGA_TARGET parameter as mentioned above.
3. You are using non-default settings for the "optimizer_index_caching" and "optimizer_index_cost_adj" parameters, which favor index-access paths / nested loops. Since "db file sequential read" is the top wait event, it might be worth checking whether the database is doing excessive index access. Also, most of the rows have been fetched by rowid (table fetch by rowid), which could also be an indicator of excessive index access / nested loop usage.
4. Your database has been working quite a lot during the 30-min. snapshot interval: it processed 123,000,000 logical blocks, which means almost 0.5GB per second. Check the top SQLs; a few are responsible for most of the blocks processed. E.g. there is an anonymous PL/SQL block that has been executed almost 17,000 times during the interval, representing 75% of the blocks processed. The statements executed as part of these procedures might be worth checking to see if they could be tuned to require fewer logical I/Os. This could be related to the non-default optimizer parameters mentioned above.
5. You are still using the compatible = 9.2.0 setting, which means this database could still be opened by a 9i instance. If this is no longer required, you might lift this to the 10g default. This will also convert the redo format to 10g, I think, which could lead to a smaller amount of redo generated. But be aware that this is a one-way operation: once compatible has been set to 10.x, you can only go back to 9i via a restore.
6. Your undo retention is set quite high (> 6000 secs), although your longest query in the AWR period was 151 seconds. It might be worth checking whether this setting is reasonable, as you might have quite a large undo tablespace at present. Oracle 10g ignores the setting if it isn't able to honor it given the current undo tablespace size.
7. "parallel_max_servers" has been set to 0, so no parallel operations can take place. This might be intentional but it's something to keep in mind.
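As a sketch only (the values below are placeholders, not recommendations derived from the report), points 2 and 5 would be implemented along these lines:

```sql
-- Remove the underscore parameter if it is no longer needed
ALTER SYSTEM RESET "_shared_pool_reserved_pct" SCOPE=SPFILE SID='*';

-- One-way change: after this, going back to 9i requires a restore
ALTER SYSTEM SET compatible = '10.2.0' SCOPE=SPFILE;

-- Both changes require an instance restart to take effect
```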
Regards,
Randolf
Oracle related stuff:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle:
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Recursive call with commit not written to redo log
In my DBA training I was led to believe that a commit caused the log writer to write the redo log buffer to the redo log file, but I find this is not true if the commit is inside recursive code.
I believe this is intentional, as a way of reducing I/O, but it does raise data integrity questions.
Apparently, if you have some PL/SQL (either SQL*Plus code or a procedure) with a loop containing a commit,
the commit does not actually ensure that the transaction is written to the redo log.
Instead, Oracle only ensures everything is written to the redo log when control is returned to the application/SQL*Plus.
You can see this by checking the redo writes in v$sysstat.
It will be less than the number of expected commits.
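A sketch of the check described here, comparing commit counts against actual log writer activity:

```sql
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('user commits', 'redo writes', 'redo synch writes');
-- With commits inside a PL/SQL loop, 'redo writes' and 'redo synch writes'
-- stay far below 'user commits'
```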
Thus the old rule of expecting all committed transactions to be there after a database recovery is not necessarily true.
Does anyone know how to force the commit to ensure redo is written,
either inside PL/SQL or perhaps via a setting in the calling environment?
Thanks
Thanks for your input.
The trouble is that I believe if you stopped in a debugger, the log writer would catch up.
Or if you killed your instance in the middle of this test, you wouldn't be sure how many commits the DB thought it did,
i.e. the DB would recover to the last known commit in the redo logs.
- maybe I should turn on tracing ?
Since posting my question I have found a site that seems to back up the results I am getting:
http://www.ixora.com.au/notes/redo_write_triggers.htm
see the note under point 3
Have a look at the stats below and you will see:
redo writes 19026
user commits 100057
How I actually tested was to run
the utlbstat script,
then run some PL/SQL that
- mimicked a business transaction (4 select lookup validations, 2 inserts, and 1 insert-or-update, plus a commit)
- looped 100,000 times
then ran utlestat.sql
i.e. test script
@C:\oracle\ora92\rdbms\admin\utlbstat.sql
connect test/test
@c:\mig\Load_test.sql
@C:\oracle\ora92\rdbms\admin\utlestat.sql
Statistic Total Per Transact Per Logon Per Second
CPU used by this session 37433 .37 935.83 79.31
CPU used when call started 37434 .37 935.85 79.31
CR blocks created 62 0 1.55 .13
DBWR checkpoint buffers wri 37992 .38 949.8 80.49
DBWR checkpoints 6 0 .15 .01
DBWR transaction table writ 470 0 11.75 1
DBWR undo block writes 22627 .23 565.68 47.94
SQL*Net roundtrips to/from 4875 .05 121.88 10.33
background checkpoints comp 5 0 .13 .01
background checkpoints star 6 0 .15 .01
background timeouts 547 .01 13.68 1.16
branch node splits 4 0 .1 .01
buffer is not pinned count 4217 .04 105.43 8.93
buffer is pinned count 649 .01 16.23 1.38
bytes received via SQL*Net 1027466 10.27 25686.65 2176.83
bytes sent via SQL*Net to c 5237709 52.35 130942.73 11096.84
calls to get snapshot scn: 1514482 15.14 37862.05 3208.65
calls to kcmgas 303700 3.04 7592.5 643.43
calls to kcmgcs 215 0 5.38 .46
change write time 4419 .04 110.48 9.36
cleanout - number of ktugct 1875 .02 46.88 3.97
cluster key scan block gets 101 0 2.53 .21
cluster key scans 49 0 1.23 .1
commit cleanout failures: b 27 0 .68 .06
commit cleanouts 1305175 13.04 32629.38 2765.2
commit cleanouts successful 1305148 13.04 32628.7 2765.14
commit txn count during cle 3718 .04 92.95 7.88
consistent changes 752 .01 18.8 1.59
consistent gets 1514852 15.14 37871.3 3209.43
consistent gets - examinati 1005941 10.05 25148.53 2131.23
data blocks consistent read 752 .01 18.8 1.59
db block changes 3465329 34.63 86633.23 7341.8
db block gets 3589136 35.87 89728.4 7604.1
deferred (CURRENT) block cl 1068723 10.68 26718.08 2264.24
enqueue releases 805858 8.05 20146.45 1707.33
enqueue requests 805852 8.05 20146.3 1707.31
execute count 1004701 10.04 25117.53 2128.6
free buffer requested 36371 .36 909.28 77.06
hot buffers moved to head o 3801 .04 95.03 8.05
immediate (CURRENT) block c 3894 .04 97.35 8.25
index fast full scans (full 448 0 11.2 .95
index fetch by key 201128 2.01 5028.2 426.12
index scans kdiixs1 501268 5.01 12531.7 1062.01
leaf node splits 1750 .02 43.75 3.71
logons cumulative 2 0 .05 0
messages received 19465 .19 486.63 41.24
messages sent 19465 .19 486.63 41.24
no work - consistent read g 3420 .03 85.5 7.25
opened cursors cumulative 201103 2.01 5027.58 426.07
opened cursors current -3 0 -.08 -.01
parse count (hard) 4 0 .1 .01
parse count (total) 201103 2.01 5027.58 426.07
parse time cpu 2069 .02 51.73 4.38
parse time elapsed 2260 .02 56.5 4.79
physical reads 6600 .07 165 13.98
physical reads direct 75 0 1.88 .16
physical writes 38067 .38 951.68 80.65
physical writes direct 75 0 1.88 .16
physical writes non checkpo 34966 .35 874.15 74.08
prefetched blocks 2 0 .05 0
process last non-idle time 1029203858 10286.18 25730096.45 2180516.65
recursive calls 3703781 37.02 92594.53 7846.99
recursive cpu usage 35210 .35 880.25 74.6
redo blocks written 1112273 11.12 27806.83 2356.51
redo buffer allocation retr 21 0 .53 .04
redo entries 1843462 18.42 46086.55 3905.64
redo log space requests 17 0 .43 .04
redo log space wait time 313 0 7.83 .66
redo size 546896692 5465.85 13672417.3 1158679.43
redo synch time 677 .01 16.93 1.43
redo synch writes 63 0 1.58 .13
redo wastage 4630680 46.28 115767 9810.76
redo write time 64354 .64 1608.85 136.34
redo writer latching time 42 0 1.05 .09
redo writes 19026 .19 475.65 40.31
rollback changes - undo rec 10 0 .25 .02
rollbacks only - consistent 122 0 3.05 .26
rows fetched via callback 1040 .01 26 2.2
session connect time 1029203858 10286.18 25730096.45 2180516.65
session logical reads 5103988 51.01 127599.7 10813.53
session pga memory -263960 -2.64 -6599 -559.24
session pga memory max -788248 -7.88 -19706.2 -1670.02
session uga memory -107904 -1.08 -2697.6 -228.61
session uga memory max 153920 1.54 3848 326.1
shared hash latch upgrades 501328 5.01 12533.2 1062.14
sorts (memory) 1467 .01 36.68 3.11
sorts (rows) 38796 .39 969.9 82.19
switch current to new buffe 347 0 8.68 .74
table fetch by rowid 1738 .02 43.45 3.68
table scan blocks gotten 424 0 10.6 .9
table scan rows gotten 4164 .04 104.1 8.82
table scans (short tables) 451 0 11.28 .96
transaction rollbacks 5 0 .13 .01
user calls 5912 .06 147.8 12.53
user commits 100057 1 2501.43 211.99
user rollbacks 56 0 1.4 .12
workarea executions - optim 1676 .02 41.9 3.55
write clones created in bac 5 0 .13 .01
write clones created in for 745 .01 18.63 1.58
99 rows selected. -
11gR2
Found that the log switches every few minutes (2-3 mins) at peak periods. I will make the recommendation for larger redo logs so that switches happen every 20 to 30 minutes.
Wanted to know: what would be the negative side of large redo logs?
>
11gR2
Found that the log switches every few minutes (2-3 mins) at peak periods. I will make the recommendation for larger redo logs so that switches happen every 20 to 30 minutes.
Wanted to know: what would be the negative side of large redo logs?
>
Patience, grasshopper!
If it ain't broke, don't fix it. So first make sure it is broken, or about to break.
Unless you have an emergency on your hands you don't want to implement a change like that without careful examination of your current log file usage and history.
You need to provide more information, such as the typical size of your log files, the number of log groups, the number of members in each group, your log archive policy, etc.
1. How often do these 'peak periods' occur? Do they occur fewer than 5 or 6 times a day? Or dozens of times?
2. How long do they last? A few minutes? Or a few hours?
3. What is the typical, non-peak rate of switches? This is really the baseline you need to compare things to.
4. What has the switch pattern been over the last few weeks or months?
5. What has the growth in DB activity been over the last few weeks or months? What do you expect over the next few months?
6. What is your goal in reducing the frequency of log switches?
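For questions 3 and 4 above, the baseline and history can be pulled straight from V$LOG_HISTORY; a sketch:

```sql
-- Daily switch counts over the last two months
SELECT TRUNC(first_time) AS day,
       COUNT(*)          AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 60
GROUP  BY TRUNC(first_time)
ORDER  BY 1;
```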
Basic negative sides include a longer time to archive each log file (the fewer logs in each group, the greater the impact) and a longer time to recover in the event you need to. With large log files there is more for Oracle to wade through to find the relevant data for restoring the DB to a given point in time.
Your suggestion of every 20 - 30 minutes means 2 to 3 times per hour. If you currently switch 10 or 12 times per hour you are making a very big change.
Although you don't want to 'tweak' the logs unnecessarily you also don't want to make such large changes.
Everything in moderation. If your current switch rate is 10 or 12 times per hour, you may want to first cut it to maybe 1/2 to 1/3, that is, to 4 or 5 times per hour. It all depends on the answers to questions like those I asked above. If you post the answers, it will be helpful to anyone trying to advise you. -
Archive logging, single redo log = archivelog?
We are running in archive log mode. I had always assumed that when a redo log was full, it was written to a new archive log file and that was it.
However, our archive log files are 200K and our redo log files are 20K... meaning you could fit 10 redo logs into a single archive log file.
We have 2 groups (1 and 2), and each has 5 members, redo01-05.
Thanks
You're completely confused about GROUPs and MEMBERS!
Have a look at this configuration:
TEST> select group#, member from v$logfile order by group#;
GROUP# MEMBER
1 /gnxprod/logA/redo0101.log
1 /gnxprod/logB/redo0102.log
2 /gnxprod/logB/redo0202.log
2 /gnxprod/logA/redo0201.log
3 /gnxprod/logA/redo0301.log
3 /gnxprod/logB/redo0302.log
4 /gnxprod/logB/redo0402.log
4 /gnxprod/logA/redo0401.log
5 /gnxprod/logB/redo0502.log
5 /gnxprod/logA/redo0501.log
10 rows selected.
This means that there are 5 groups, each containing 2 members. During redo log file usage, Oracle will write to one of these groups, then the following one, then the following.
Note that the "next" group is not necessarily the "sequential next", i.e. not mandatorily 5 after 4 and then 1 after 5.
We use multiple GROUPs mainly because of archiving and because redo is needed for instance crash recovery.
We use multiple (multiplexed) group MEMBERS because of hardware failure risk.
There's too much to tell straight away. You should rush for the manual (http://tahiti.oracle.com); it's all explained in the Concepts and Admin books.
Yoann. -
Too many redo log files...
Hi,
I have a very light application on Oracle 9.2.0.7 on 32-bit Linux that is generating 400 logfiles a day. I can't find why those logs are being generated!
The only thing relevant in that application is a big table that is used only for inserts (1000 per hour), for audit reasons. But this table was created with the NOLOGGING option.
Redo Size: 4 groups of 40 Mb each.
The insert statement uses a sequence to generate a unique key. Is this sequence causing my big logfile generation?
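One way to locate the real source of the redo, rather than guessing, is to rank sessions by redo generated; a sketch (requires SELECT on the V$ views):

```sql
-- Sessions ranked by redo generated since logon
SELECT s.sid, s.username, st.value AS redo_bytes
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
JOIN   v$session  s  ON s.sid        = st.sid
WHERE  sn.name = 'redo size'
ORDER  BY st.value DESC;
```

Note that NOLOGGING only suppresses redo for direct-path operations; conventional single-row inserts into a NOLOGGING table still generate full redo.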
Thanks,
Paulo.
Here is the statspack:
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
DB 378381468 DB 1 9.2.0.7.0 NO host
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 12 28-Jun-07 11:05:11 26 1,198.7
End Snap: 13 28-Jun-07 12:05:24 29 1,077.2
Elapsed: 60.22 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 512M Std Block Size: 8K
Shared Pool Size: 512M Log Buffer: 5,120K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 281,252.38 2,073.48
Logical reads: 73,113.76 539.02
Block changes: 3,133.29 23.10
Physical reads: 3.24 0.02
Physical writes: 21.39 0.16
User calls: 26.12 0.19
Parses: 145.64 1.07
Hard parses: 0.81 0.01
Sorts: 138.33 1.02
Logons: 0.69 0.01
Executes: 443.27 3.27
Transactions: 135.64
% Blocks changed per Read: 4.29 Recursive Call %: 98.97
Rollback per transaction %: 0.13 Rows per Sort: 17.26
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 99.99
Library Hit %: 99.66 Soft Parse %: 99.44
Execute to Parse %: 67.14 Latch Hit %: 99.93
Parse CPU to Parse Elapsd %: 55.03 % Non-Parse CPU: 99.22
Shared Pool Statistics Begin End
Memory Usage %: 91.06 91.23
% SQL with executions>1: 44.54 39.78
% Memory for SQL w/exec>1: 43.09 33.89
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
CPU time 3,577 84.73
log file parallel write 854,726 359 8.51
row cache lock 56,780 104 2.47
process startup 172 91 2.16
SQL*Net message from dblink 5,001 22 .53
Wait Events for DB: DB Instance: DB Snaps: 12 -13
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file parallel write 854,726 0 359 0 1.7
row cache lock 56,780 0 104 2 0.1
process startup 172 4 91 530 0.0
SQL*Net message from dblink 5,001 0 22 4 0.0
log file sync 3,015 3 19 6 0.0
enqueue 471 1 9 20 0.0
buffer busy waits 20,290 0 8 0 0.0
db file sequential read 3,853 0 6 2 0.0
SQL*Net more data from dblin 88,584 0 5 0 0.2
control file parallel write 1,704 0 5 3 0.0
latch free 1,404 748 4 3 0.0
single-task message 134 0 4 27 0.0
LGWR wait for redo copy 8,230 1 2 0 0.0
log file switch completion 60 0 2 32 0.0
log file sequential read 1,333 0 2 1 0.0
control file sequential read 4,530 0 1 0 0.0
db file scattered read 246 0 0 1 0.0
SQL*Net more data to client 7,292 0 0 0 0.0
SQL*Net break/reset to clien 72 0 0 1 0.0
db file parallel write 4,568 0 0 0 0.0
log file single write 62 0 0 0 0.0
async disk IO 3,410 0 0 0 0.0
SQL*Net message to dblink 5,001 0 0 0 0.0
direct path read (lob) 84 0 0 0 0.0
direct path read 318 0 0 0 0.0
direct path write 312 0 0 0 0.0
buffer deadlock 115 115 0 0 0.0
SQL*Net message from client 86,475 0 27,758 321 0.2
jobq slave wait 4,594 4,532 13,455 2929 0.0
SQL*Net more data from clien 602 0 1 2 0.0
SQL*Net message to client 86,481 0 0 0 0.2
Background Wait Events for DB: DB Instance: DB Snaps: 12 -13
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file parallel write 854,744 0 359 0 1.7
control file parallel write 1,704 0 5 3 0.0
LGWR wait for redo copy 8,230 1 2 0 0.0
log file sequential read 1,333 0 2 1 0.0
control file sequential read 1,849 0 1 1 0.0
db file parallel write 4,567 0 0 0 0.0
latch free 74 0 0 0 0.0
rdbms ipc reply 65 0 0 0 0.0
log file single write 62 0 0 0 0.0
async disk IO 3,410 0 0 0 0.0
db file sequential read 1 0 0 8 0.0
buffer busy waits 5 0 0 0 0.0
direct path read 248 0 0 0 0.0
direct path write 248 0 0 0 0.0
rdbms ipc message 868,357 6,776 30,095 35 1.8
pmon timer 1,204 1,204 3,529 2931 0.0
smon timer 154 0 3,514 22816 0.0
Instance Activity Stats for DB: DB Instance: DB Snaps: 12 -13
Statistic Total per Second per Trans
active txn count during cleanout 2,844 0.8 0.0
background checkpoints completed 31 0.0 0.0
background checkpoints started 31 0.0 0.0
background timeouts 7,956 2.2 0.0
branch node splits 15 0.0 0.0
buffer is not pinned count 324,721,116 89,875.8 662.6
buffer is pinned count 308,901,876 85,497.3 630.3
bytes received via SQL*Net from c 8,048,130 2,227.6 16.4
bytes received via SQL*Net from d 181,575,342 50,256.1 370.5
bytes sent via SQL*Net to client 33,964,494 9,400.6 69.3
bytes sent via SQL*Net to dblink 933,170 258.3 1.9
calls to get snapshot scn: kcmgss 9,900,434 2,740.2 20.2
calls to kcmgas 985,222 272.7 2.0
calls to kcmgcs 11,669 3.2 0.0
change write time 9,910 2.7 0.0
cleanout - number of ktugct calls 18,903 5.2 0.0
cleanouts and rollbacks - consist 33 0.0 0.0
cleanouts only - consistent read 932 0.3 0.0
cluster key scan block gets 289,955 80.3 0.6
cluster key scans 101,840 28.2 0.2
commit cleanout failures: block l 0 0.0 0.0
commit cleanout failures: buffer 113 0.0 0.0
commit cleanout failures: callbac 96 0.0 0.0
commit cleanout failures: cannot 3,095 0.9 0.0
commit cleanouts 1,966,376 544.3 4.0
commit cleanouts successfully com 1,963,072 543.3 4.0
commit txn count during cleanout 309,283 85.6 0.6
consistent changes 5,245,452 1,451.8 10.7
consistent gets 242,967,989 67,248.3 495.8
consistent gets - examination 135,768,580 37,577.8 277.0
CPU used by this session 357,659 99.0 0.7
CPU used when call started 344,951 95.5 0.7
CR blocks created 768 0.2 0.0
current blocks converted for CR 0 0.0 0.0
cursor authentications 886 0.3 0.0
data blocks consistent reads - un 1,760 0.5 0.0
db block changes 11,320,580 3,133.3 23.1
db block gets 21,192,200 5,865.5 43.2
DBWR buffers scanned 0 0.0 0.0
DBWR checkpoint buffers written 69,649 19.3 0.1
DBWR checkpoints 31 0.0 0.0
DBWR free buffers found 0 0.0 0.0
DBWR lru scans 0 0.0 0.0
DBWR make free requests 0 0.0 0.0
DBWR revisited being-written buff 0 0.0 0.0
DBWR summed scan depth 0 0.0 0.0
DBWR transaction table writes 2,070 0.6 0.0
DBWR undo block writes 44,323 12.3 0.1
deferred (CURRENT) block cleanout 745,333 206.3 1.5
dirty buffers inspected 1 0.0 0.0
enqueue conversions 8,193 2.3 0.0
enqueue deadlocks 1 0.0 0.0
enqueue releases 2,002,960 554.4 4.1
enqueue requests 2,002,963 554.4 4.1
enqueue timeouts 3 0.0 0.0
enqueue waits 451 0.1 0.0
Instance Activity Stats for DB: DB Instance: DB Snaps: 12 -13
Statistic Total per Second per Trans
exchange deadlocks 115 0.0 0.0
execute count 1,601,528 443.3 3.3
free buffer inspected 30 0.0 0.0
free buffer requested 1,196,628 331.2 2.4
hot buffers moved to head of LRU 26,707 7.4 0.1
immediate (CR) block cleanout app 965 0.3 0.0
immediate (CURRENT) block cleanou 10,817 3.0 0.0
index fast full scans (full) 0 0.0 0.0
index fetch by key 131,028,270 36,265.8 267.4
index scans kdiixs1 17,868,907 4,945.7 36.5
leaf node splits 4,528 1.3 0.0
leaf node 90-10 splits 3,017 0.8 0.0
logons cumulative 2,499 0.7 0.0
messages received 859,631 237.9 1.8
messages sent 859,631 237.9 1.8
no buffer to keep pinned count 21,253 5.9 0.0
no work - consistent read gets 87,667,752 24,264.5 178.9
opened cursors cumulative 528,984 146.4 1.1
OS Involuntary context switches 0 0.0 0.0
OS Page faults 0 0.0 0.0
OS Page reclaims 0 0.0 0.0
OS System time used 0 0.0 0.0
OS User time used 0 0.0 0.0
OS Voluntary context switches 0 0.0 0.0
parse count (failures) 7 0.0 0.0
parse count (hard) 2,928 0.8 0.0
parse count (total) 526,209 145.6 1.1
parse time cpu 2,778 0.8 0.0
parse time elapsed 5,048 1.4 0.0
physical reads 11,690 3.2 0.0
physical reads direct 6,698 1.9 0.0
physical reads direct (lob) 102 0.0 0.0
physical writes 77,270 21.4 0.2
physical writes direct 7,620 2.1 0.0
physical writes direct (lob) 0 0.0 0.0
physical writes non checkpoint 33,360 9.2 0.1
pinned buffers inspected 0 0.0 0.0
prefetched blocks 799 0.2 0.0
prefetched blocks aged out before 0 0.0 0.0
process last non-idle time 3,630 1.0 0.0
recursive calls 9,053,277 2,505.8 18.5
recursive cpu usage 255,973 70.9 0.5
redo blocks written 2,572,625 712.1 5.3
redo buffer allocation retries 50 0.0 0.0
redo entries 3,074,994 851.1 6.3
redo log space requests 60 0.0 0.0
redo log space wait time 193 0.1 0.0
redo ordering marks 0 0.0 0.0
redo size 1,016,164,852 281,252.4 2,073.5
redo synch time 1,956 0.5 0.0
redo synch writes 5,317 1.5 0.0
redo wastage 259,689,040 71,876.3 529.9
redo write time 37,488 10.4 0.1
redo writer latching time 242 0.1 0.0
redo writes 854,744 236.6 1.7
rollback changes - undo records a 1,098 0.3 0.0
rollbacks only - consistent read 747 0.2 0.0
rows fetched via callback 117,908,375 32,634.5 240.6
session connect time 0 0.0 0.0
session cursor cache count 16 0.0 0.0
session cursor cache hits 484,372 134.1 1.0
session logical reads 264,160,020 73,113.8 539.0
session pga memory 16,473,320 4,559.5 33.6
session pga memory max 16,914,080 4,681.5 34.5
session uga memory 17,216,514,728 4,765,157.7 35,130.3
session uga memory max 1,865,036,296 516,201.6 3,805.6
shared hash latch upgrades - no w 17,251,803 4,774.9 35.2
shared hash latch upgrades - wait 24,671 6.8 0.1
sorts (disk) 32 0.0 0.0
sorts (memory) 499,747 138.3 1.0
sorts (rows) 8,626,333 2,387.6 17.6
SQL*Net roundtrips to/from client 80,069 22.2 0.2
SQL*Net roundtrips to/from dblink 5,001 1.4 0.0
summed dirty queue length 0 0.0 0.0
switch current to new buffer 1 0.0 0.0
table fetch by rowid 238,882,317 66,117.4 487.4
table fetch continued row 4,436,670 1,228.0 9.1
table scan blocks gotten 5,066,302 1,402.2 10.3
table scan rows gotten 134,679,712 37,276.4 274.8
table scans (direct read) 0 0.0 0.0
table scans (long tables) 447 0.1 0.0
table scans (short tables) 152,382 42.2 0.3
transaction rollbacks 530 0.2 0.0
transaction tables consistent rea 0 0.0 0.0
transaction tables consistent rea 0 0.0 0.0
user calls 94,382 26.1 0.2
user commits 489,423 135.5 1.0
user rollbacks 653 0.2 0.0
write clones created in backgroun 11 0.0 0.0
write clones created in foregroun 878 0.2 0.0
Tablespace IO Stats for DB: DB Instance: DB Snaps: 12 -13
->ordered by IOs (Reads + Writes) desc
Tablespace
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
T1_UNDO
31 0 0.0 1.0 46,535 13 344 0.4
T1
31 0 0.0 1.0 13,754 4 3,657 0.4
T2
3,308 1 0.8 1.1 2,973 1 0 0.0
T3
31 0 0.0 1.0 5,710 2 16,240 0.4
T4
555 0 4.0 1.0 600 0 0 0.0
SYSTEM
429 0 3.9 2.5 280 0 49 0.2
TEMP
134 0 0.4 48.1 238 0 0 0.0
T1_16K
31 0 0.0 1.0 31 0 0 0.0
T2_16K
31 0 0.0 1.0 31 0 0 0.0
Buffer Pool Statistics for DB: DB Instance: DB Snaps: 12 -13
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Write Buffer
Number of Cache Buffer Physical Physical Buffer Complete Busy
P Buffers Hit % Gets Reads Writes Waits Waits Waits
D 49,625 100.0 263,975,320 4,909 69,666 0 0 20,290
16k 7,056 100.0 30 0 0 0 0 0
Instance Recovery Stats for DB: DB Instance: DB Snaps: 12 -13
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
B 0 0 10518 10000 73728 186265 10000
E 0 0 13189 10000 73728 219498 10000
Buffer Pool Advisory for DB: DB Instance: DB End Snap: 13
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate
Size for Size Buffers for Est Physical Estimated
P Estimate (M) Factr Estimate Read Factor Physical Reads
D 32 .1 3,970 205.60 4,726,309,734
D 64 .2 7,940 111.86 2,571,419,284
D 96 .2 11,910 59.99 1,379,092,849
D 128 .3 15,880 32.24 741,224,090
D 160 .4 19,850 16.05 369,050,333
D 192 .5 23,820 1.28 29,352,221
D 224 .6 27,790 1.05 24,077,507
D 256 .6 31,760 1.03 23,723,389
D 288 .7 35,730 1.02 23,518,434
D 320 .8 39,700 1.01 23,328,106
D 352 .9 43,670 1.01 23,193,257
D 384 1.0 47,640 1.00 23,064,957
D 400 1.0 49,625 1.00 22,987,576
D 416 1.0 51,610 1.00 22,927,325
D 448 1.1 55,580 0.99 22,824,032
D 480 1.2 59,550 0.99 22,713,509
D 512 1.3 63,520 0.99 22,649,147
D 544 1.4 67,490 0.98 22,605,489
D 576 1.4 71,460 0.98 22,525,897
D 608 1.5 75,430 0.97 22,407,418
D 640 1.6 79,400 0.96 22,022,381
16k 16 .1 1,008 1.00 139,218,299
16k 32 .3 2,016 1.00 139,211,699
16k 48 .4 3,024 1.00 139,207,678
16k 64 .6 4,032 1.00 139,202,581
16k 80 .7 5,040 1.00 139,198,339
16k 96 .9 6,048 1.00 139,193,448
16k 112 1.0 7,056 1.00 139,188,446
16k 128 1.1 8,064 1.00 139,183,808
16k 144 1.3 9,072 1.00 139,179,598
16k 160 1.4 10,080 1.00 139,175,656
16k 176 1.6 11,088 1.00 139,170,607
16k 192 1.7 12,096 1.00 139,166,491
16k 208 1.9 13,104 1.00 139,162,487
16k 224 2.0 14,112 1.00 139,158,197
16k 240 2.1 15,120 1.00 139,153,797
16k 256 2.3 16,128 1.00 139,149,365
16k 272 2.4 17,136 1.00 139,144,252
16k 288 2.6 18,144 1.00 139,140,121
16k 304 2.7 19,152 1.00 139,135,435
16k 320 2.9 20,160 1.00 139,130,845
Buffer wait Statistics for DB: DB Instance: DB Snaps: 12 -13
-> ordered by wait time desc, waits desc
Tot Wait Avg
Class Waits Time (s) Time (ms)
data block 19,912 8 0
undo header 343 0 0
segment header 34 0 0
undo block 1 0 0
Enqueue activity for DB: DB Instance: DB Snaps: 12 -13
-> Enqueue stats gathered prior to 9i should not be compared with 9i data
-> ordered by Wait Time desc, Waits desc
Avg Wt Wait
Eq Requests Succ Gets Failed Gets Waits Time (ms) Time (s)
TM 981,781 981,773 0 7 1,365.43 10
TX 983,944 983,906 0 412 .59 0
HW 4,645 4,645 0 32 .09 0
Rollback Segment Stats for DB: DB Instance: DB Snaps: 12 -13
->A high value for "Pct Waits" suggests more rollback segments may be required
->RBS stats may not be accurate between begin and end snaps when using Auto Undo
managment, as RBS may be dynamically created and dropped as needed
Trans Table Pct Undo Bytes
RBS No Gets Waits Written Wraps Shrinks Extends
0 155.0 0.00 0 0 0 0
1 202,561.0 0.00 31,178,710 40 2 3
2 191,044.0 0.00 30,067,156 23 2 6
3 195,891.0 0.00 30,470,548 39 1 3
4 203,928.0 0.00 31,822,638 38 2 5
5 196,386.0 0.00 -4,264,350,168 38 1 3
6 204,125.0 0.00 32,081,200 24 1 7
7 192,169.0 0.00 33,732,012 45 3 6
8 195,819.0 0.00 30,503,550 40 2 2
9 202,905.0 0.00 31,595,438 40 2 4
10 195,796.0 0.00 30,566,652 29 4 9
Rollback Segment Storage for DB: DB Instance: DB Snaps: 12 -13
->Optimal Size should be larger than Avg Active
RBS No Segment Size Avg Active Optimal Size Maximum Size
0 385,024 0 385,024
1 12,705,792 944,176 2,213,732,352
2 11,657,216 1,548,937 2,214,715,392
3 13,754,368 832,465 243,392,512
4 13,754,368 946,902 235,069,440
5 12,705,792 964,352 2,195,374,080
6 20,045,824 1,232,438 2,416,041,984
7 12,705,792 977,490 3,822,182,400
8 10,608,640 875,068 243,392,512
9 11,657,216 878,119 243,392,512
10 18,997,248 1,034,104 2,281,889,792
Undo Segment Summary for DB: DB Instance: DB Snaps: 12 -13
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Undo Num Max Qry Max Tx Snapshot Out of uS/uR/uU/
TS# Blocks Trans Len (s) Concurcy Too Old Space eS/eR/eU
1 44,441 ########## 47 2 0 0 0/0/0/0/0/0
Undo Segment Stats for DB: DB Instance: DB Snaps: 12 -13
-> ordered by Time desc
Undo Num Max Qry Max Tx Snap Out of uS/uR/uU/
End Time Blocks Trans Len (s) Concy Too Old Space eS/eR/eU
28-Jun 11:56 7,111 ######## 47 1 0 0 0/0/0/0/0/0
28-Jun 11:46 10,782 ######## 18 2 0 0 0/0/0/0/0/0
28-Jun 11:36 6,170 ######## 42 1 0 0 0/0/0/0/0/0
28-Jun 11:26 4,966 ######## 13 1 0 0 0/0/0/0/0/0
28-Jun 11:16 6,602 ######## 40 1 0 0 0/0/0/0/0/0
28-Jun 11:06 8,810 ######## 10 1 0 0 0/0/0/0/0/0
Latch Activity for DB: DB Instance: DB Snaps: 12 -13
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
active checkpoint queue 9,585 0.0 0.0 0 0
alert log latch 158 0.0 0 0
archive control 220 0.0 0 0
archive process latch 220 0.5 1.0 0 0
cache buffer handles 264,718 0.0 0.0 0 0
cache buffers chains 416,051,175 0.0 0.0 4 401,018 0.0
cache buffers lru chain 1,285,963 0.0 0.0 0 1,206,550 0.0
channel handle pool latc 4,927 0.0 0 0
channel operations paren 10,788 0.0 0 0
checkpoint queue latch 528,319 0.0 0.0 0 69,506 0.0
child cursor hash table 35,371 0.0 0 0
Consistent RBA 854,833 0.0 0.0 0 0
dml lock allocation 1,963,007 0.9 0.0 0 0
dummy allocation 4,995 0.0 0 0
enqueue hash chains 4,014,593 0.5 0.0 0 0
enqueues 94,666 0.0 0.0 0 0
event group latch 2,340 0.0 0 0
FAL request queue 72 0.0 0 0
FIB s.o chain latch 310 0.0 0 0
FOB s.o list latch 6,769 0.0 0 0
global tx hash mapping 10,388 0.0 0 0
hash table column usage 16 0.0 0 479 0.0
job workq parent latch 0 0 316 0.0
job_queue_processes para 116 0.0 0 0
ktm global data 200 0.0 0 0
lgwr LWN SCN 855,008 0.0 0.0 0 0
library cache 5,836,900 0.4 0.0 0 8,926 0.6
library cache load lock 468 0.0 0 0
library cache pin 3,510,695 0.0 0.0 0 0
library cache pin alloca 1,402,523 0.0 0.0 0 0
list of block allocation 6,115 0.0 0 0
loader state object free 620 0.0 0 0
message pool operations 262 0.0 0 0
messages 2,664,950 0.4 0.0 0 0
mostly latch-free SCN 856,000 0.1 0.0 0 0
multiblock read objects 3,184 0.0 0 0
ncodef allocation latch 57 0.0 0 0
object stats modificatio 8 0.0 0 0
post/wait queue 6,183 0.0 0 3,082 0.0
process allocation 4,677 0.0 0 2,340 0.0
process group creation 4,677 0.0 0 0
redo allocation 4,784,936 0.5 0.0 0 0
redo copy 0 0 3,081,261 0.3
redo writing 2,576,299 0.0 0.2 0 0
row cache enqueue latch 3,017,144 0.0 0.0 0 0
row cache objects 5,049,552 0.8 0.0 0 92 0.0
sequence cache 984,824 0.0 0.1 0 0
session allocation 110,417 0.0 0.0 0 0
session idle bit 205,319 0.0 0 0
session switching 57 0.0 0 0
session timer 1,204 0.0 0 0
shared pool 2,409,725 0.1 0.1 0 0
simulator hash latch 7,439,429 0.0 0.0 0 0
simulator lru latch 202 0.0 0 128,961 0.2
sort extent pool 1,053 0.0 0 0
SQL memory manager worka 67 0.0 0 0
temp lob duration state 187 0.0 0 0
transaction allocation 7,290 0.0 0 0
transaction branch alloc 5,668 0.0 0 0
undo global data 3,002,808 0.4 0.0 0 0
user lock 8,642 0.0 0 0
Latch Sleep breakdown for DB: DB Instance: DB Snaps: 12 -13
-> ordered by misses desc
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
cache buffers chains 416,051,175 197,296 750 196776/298/2
15/7/0
row cache objects 5,049,552 42,368 38 42330/38/0/0
/0
redo allocation 4,784,936 24,766 77 24697/61/8/0
/0
library cache 5,836,900 23,477 276 23207/264/6/
0/0
enqueue hash chains 4,014,593 21,061 26 21035/26/0/0
/0
dml lock allocation 1,963,007 17,887 16 17872/14/1/0
/0
undo global data 3,002,808 12,350 8 12342/8/0/0/
0
messages 2,664,950 10,131 5 10126/5/0/0/
0
shared pool 2,409,725 1,362 189 1175/185/2/0
/0
row cache enqueue latch 3,017,144 470 7 463/7/0/0/0
mostly latch-free SCN 856,000 434 1 433/1/0/0/0
library cache pin 3,510,695 345 4 341/4/0/0/0
sequence cache 984,824 53 4 49/4/0/0/0
library cache pin allocati 1,402,523 35 1 34/1/0/0/0
redo writing 2,576,299 5 1 4/1/0/0/0
archive process latch 220 1 1 0/1/0/0/0
Latch Miss Sources for DB: DB Instance: DB Snaps: 12 -13
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
archive process latch kcrrpa 0 1 0
cache buffers chains kcbgtcr: fast path 0 346 188
cache buffers chains kcbgtcr: kslbegin excl 0 163 239
cache buffers chains kcbrls: kslbegin 0 86 170
cache buffers chains kcbget: pin buffer 0 53 49
cache buffers chains kcbgcur: kslbegin 0 44 20
cache buffers chains kcbnlc 0 38 22
cache buffers chains kcbget: exchange 0 8 16
cache buffers chains kcbchg: kslbegin: call CR 0 3 21
cache buffers chains kcbget: exchange rls 0 3 2
cache buffers chains kcbnew 0 3 0
cache buffers chains kcbbxsv 0 2 0
cache buffers chains kcbchg: kslbegin: bufs not 0 1 23
dml lock allocation ktaiam 0 13 1
dml lock allocation ktaidm 0 3 15
enqueue hash chains ksqgtl3 0 22 2
enqueue hash chains ksqrcl 0 4 24
library cache kglic 0 55 4
library cache kglhdgn: child: 0 42 86
library cache kglobpn: child: 0 26 32
library cache kglpndl: child: after proc 0 14 0
library cache kglpndl: child: before pro 0 13 73
library cache kglpin: child: heap proces 0 12 29
library cache kgllkdl: child: cleanup 0 11 4
library cache kglupc: child 0 4 7
library cache kgldti: 2child 0 2 4
library cache kglpnp: child 0 1 4
library cache pin kglpnal: child: alloc spac 0 3 3
library cache pin kglpndl 0 1 1
library cache pin alloca kglpnal 0 1 0
messages ksaamb: after wakeup 0 3 2
messages ksarcv 0 2 2
mostly latch-free SCN kcslcu3 0 1 1
redo allocation kcrfwr 0 74 8
redo allocation kcrfwi: more space 0 -
Dear all,
In ST04 I see a redo log wait. Is this a problem? Please suggest how to resolve it.
Please find the details below.
Size (kB) 14,352
Entries 42,123,046
Allocation retries 9,103
Alloc fault rate(%) 0.0
Redo log wait (s) 486
Log files (in use) 8 ( 8 )
DB_INST_ID Instance ID 1
DB_INSTANCE DB instance name prd
DB_NODE Database node A
DB_RELEASE Database release 10.2.0.4.0
DB_SYS_TIMESTAMP Day, Time 06.04.2010 13:07:10
DB_SYSDATE DB System date 20100406
DB_SYSTIME DB System time 130710
DB_STARTUP_TIMESTAMP Start up at 22.03.2010 03:51:02
DB_STARTDATE DB Startup date 20100322
DB_STARTTIME DB Startup time 35102
DB_ELAPSED Seconds since start 1329368
DB_SNAPDIFF Sec. btw. snapshots 1329368
DATABUFFERSIZE Size (kB) 3784704
DBUFF_QUALITY Quality (%) 96.3
DBUFF_LOGREADS Logical reads 5615573538
DBUFF_PHYSREADS Physical reads 207302988
DBUFF_PHYSWRITES Physical writes 7613263
DBUFF_BUSYWAITS Buffer busy waits 878188
DBUFF_WAITTIME Buffer wait time (s) 3583
SHPL_SIZE Size (kB) 1261568
SHPL_CAQUAL DD-cache Quality (%) 95.1
SHPL_GETRATIO SQL area getratio(%) 98.4
SHPL_PINRATIO SQL area pinratio(%) 99.9
SHPL_RELOADSPINS SQLA.Reloads/pins(%) 0.0042
LGBF_SIZE Size (kB) 14352
LGBF_ENTRIES Entries 42123046
LGBF_ALLORETR Allocation retries 9103
LGBF_ALLOFRAT Alloc fault rate(%) 0
LGBF_REDLGWT Redo log wait (s) 486
LGBF_LOGFILES Log files 8
LGBF_LOGFUSE Log files (in use) 8
CLL_USERCALLS User calls 171977181
CLL_USERCOMM User commits 1113161
CLL_USERROLLB User rollbacks 34886
CLL_RECURSIVE Recursive calls 36654755
CLL_PARSECNT Parse count 10131732
CLL_USR_PER_RCCLL User/recursive calls 4.7
CLL_RDS_PER_UCLL Log.Reads/User Calls 32.7
TIMS_BUSYWT Busy wait time (s) 389991
TIMS_CPUTIME CPU time session (s) 134540
TIMS_TIM_PER_UCLL Time/User call (ms) 3
TIMS_SESS_BUSY Sessions busy (%) 0.94
TIMS_CPUUSAGE CPU usage (%) 2.53
TIMS_CPUCOUNT Number of CPUs 4
RDLG_WRITES Redo writes 1472363
RDLG_OSBLCKWRT OS blocks written 54971892
RDLG_LTCHTIM Latching time (s) 19
RDLG_WRTTIM Redo write time (s) 2376
RDLG_MBWRITTEN MB written 25627
TABSF_SHTABSCAN Short table scans 12046230
TABSF_LGTABSCAN Long table scans 6059
TABSF_FBYROWID Table fetch by rowid 1479714431
TABSF_FBYCONTROW Fetch by contin. row 2266031
SORT_MEMORY Sorts (memory) 3236898
SORT_DISK Sorts (disk) 89
SORT_ROWS Sorts (rows) 5772889843
SORT_WAEXOPT WA exec. optim. mode 1791746
SORT_WAEXONEP WA exec. one pass m. 93
SORT_WAEXMULTP WA exec. multipass m 0
IEFF_SOFTPARSE Soft parse ratio 0.9921
IEFF_INMEM_SORT In-memory sort ratio 1
IEFF_PARSTOEXEC Parse to exec. ratio 0.9385
IEFF_PARSCPUTOTOT Parse CPU to Total 0.9948
IEFF_PTCPU_PTELPS PTime CPU / PT elps. 0.1175
Regards,
Kumar
Hi,
If the redo log buffer is not large enough, user processes must wait for space to become available before the log writer can accept their redo. This wait time is passed on to the end user, so it can cause a performance problem at the database end and needs to be tuned.
The size of the redo log buffer is defined in the init.ora file with the LOG_BUFFER parameter. The statistic 'redo log space requests' reflects the number of times a user process waited for space in the redo log buffer.
If the size of the redo log buffer is too small and is causing these waits, the recommendation is to increase it until the value of 'redo log space requests' stays close to zero.
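Both numbers can be checked from SQL*Plus with the standard dynamic performance views (a sketch; the acceptable thresholds depend on your workload):

```sql
-- How many times sessions had to wait for space in the redo log buffer
SELECT name, value FROM v$sysstat WHERE name = 'redo log space requests';

-- Current redo log buffer size in bytes (LOG_BUFFER parameter)
SELECT value FROM v$parameter WHERE name = 'log_buffer';
```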
regards,
rakesh -
Hi,
I am running Oracle 11g on Red Hat with a data block size of 8K
I have question on the efficiency of redo logs
I have an application that extracts data from one DB and updates a second DB based on this data (not a copy). I have the option of doing it as a single mass insert such as the one below. My question is whether this is more efficient (purely in terms of redo log generation) than doing discrete single-row inserts. The reason for my question is that the current ETL process design is to drop the table and do a mass re-insert, and some concerns have been raised about the impact on redo log generation. I believe it will be efficient, as there will only be one set of log entries per block affected.
Picking your brains and gaining from your knowledge is much appreciated.
Cheers,
Daryl
INSERT INTO TMP_DIM_EXCH_RT
(EXCH_WH_KEY,
EXCH_NAT_KEY,
EXCH_DATE, EXCH_RATE,
FROM_CURCY_CD,
TO_CURCY_CD,
EXCH_EFF_DATE,
EXCH_EFF_END_DATE,
EXCH_LAST_UPDATED_DATE)
VALUES
(1, 1, '28-AUG-2008', 109.49, 'USD', 'JPY', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'),
(2, 1, '28-AUG-2008', .54, 'USD', 'GBP', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'),
(3, 1, '28-AUG-2008', 1.05, 'USD', 'CAD', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'),
(4, 1, '28-AUG-2008', .68, 'USD', 'EUR', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'),
(5, 1, '28-AUG-2008', 1.16, 'USD', 'AUD', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'),
(6, 1, '28-AUG-2008', 7.81, 'USD', 'HKD', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008');
darylo wrote:
I have an application that extracts data from one DB and updates a second DB based on this data (not a copy). I have the option of doing it as a single mass insert such as the one below. My question is whether this is more efficient (purely in terms of redo log generation) than doing discrete single-row inserts.
A single mass insert is much more efficient in terms of redo usage, and in almost everything else.
{message:id=4337747}
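As an aside: the multi-row VALUES list in the question is not accepted by Oracle's SQL dialect. The same single-statement load can be written with INSERT ALL (a sketch using the question's table and a subset of its columns, not tested against the poster's schema):

```sql
-- One statement, so redo is generated in a single stream
INSERT ALL
  INTO TMP_DIM_EXCH_RT (EXCH_WH_KEY, EXCH_NAT_KEY, EXCH_DATE, EXCH_RATE,
                        FROM_CURCY_CD, TO_CURCY_CD)
    VALUES (1, 1, DATE '2008-08-28', 109.49, 'USD', 'JPY')
  INTO TMP_DIM_EXCH_RT (EXCH_WH_KEY, EXCH_NAT_KEY, EXCH_DATE, EXCH_RATE,
                        FROM_CURCY_CD, TO_CURCY_CD)
    VALUES (2, 1, DATE '2008-08-28', 0.54, 'USD', 'GBP')
SELECT * FROM dual;
```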
Run1 - Single insert, Run2 - multiple inserts for same data.
Name Run1 Run2 Diff
STAT...Heap Segment Array Inse 554 13 -541
STAT...free buffer requested 222 909 687
STAT...redo subscn max counts 221 990 769
STAT...redo ordering marks 44 871 827
STAT...calls to kcmgas 46 873 827
STAT...free buffer inspected 133 982 849
LATCH.object queue header oper 625 2,624 1,999
LATCH.simulator hash latch 223 6,005 5,782
STAT...redo entries 1,258 100,643 99,385
STAT...HSC Heap Segment Block 555 100,014 99,459
STAT...session cursor cache hi 5 100,010 100,005
STAT...opened cursors cumulati 7 100,014 100,007
STAT...execute count 7 100,014 100,007
STAT...session logical reads 2,162 102,988 100,826
STAT...db block gets 1,853 102,780 100,927
STAT...db block gets from cach 1,853 102,780 100,927
STAT...recursive calls 33 101,092 101,059
STAT...db block changes 1,873 201,552 199,679
LATCH.cache buffers chains 7,176 507,892 500,716
STAT...undo change vector size 240,500 6,802,736 6,562,236
STAT...redo size 1,566,136 24,504,020 22,937,884 -
How do I manually archive 1 redo log at a time?
The database is configured in archive mode, but automatic archiving is turned off.
For both Oracle 901 and 920 on Windows, when I try to manually archive a single redo log, the database
archives as many logs as it can up to the log just before the current log:
For example:
SQL> select * from v$log order by sequence#;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 14 104857600 1 NO INACTIVE 424246 19-JAN-05
2 1 15 104857600 1 NO INACTIVE 425087 28-MAR-05
3 1 16 104857600 1 NO INACTIVE 425088 28-MAR-05
4 1 17 512000 1 NO INACTIVE 425092 28-MAR-05
5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
6 rows selected.
SQL> alter system archive log next;
System altered.
SQL> select * from v$log order by sequence#;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 14 104857600 1 YES INACTIVE 424246 19-JAN-05
2 1 15 104857600 1 YES INACTIVE 425087 28-MAR-05
3 1 16 104857600 1 YES INACTIVE 425088 28-MAR-05
4 1 17 512000 1 YES INACTIVE 425092 28-MAR-05
5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
See - instead of only 1 log being archived, 4 of them were. Oracle behaves the same way if I use the "sequence" option:
SQL> select * from v$log order by sequence#;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 14 104857600 1 NO INACTIVE 424246 19-JAN-05
2 1 15 104857600 1 NO INACTIVE 425087 28-MAR-05
3 1 16 104857600 1 NO INACTIVE 425088 28-MAR-05
4 1 17 512000 1 NO INACTIVE 425092 28-MAR-05
5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
6 rows selected.
SQL> alter system archive log next;
System altered.
SQL> select * from v$log order by sequence#;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 14 104857600 1 YES INACTIVE 424246 19-JAN-05
2 1 15 104857600 1 YES INACTIVE 425087 28-MAR-05
3 1 16 104857600 1 YES INACTIVE 425088 28-MAR-05
4 1 17 512000 1 YES INACTIVE 425092 28-MAR-05
5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
Is there some default system configuration property telling Oracle to archive as many logs as it can?
Thanks,
DGR
Thanks Yoann (and Syed Jaffar Hussain too),
but I don't have a problem finding the group to archive or executing the ALTER SYSTEM ARCHIVE LOG command.
My problem is that Oracle doesn't work the way I expect it to.
This comes from the Oracle 9.2 online doc:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_23a.htm#2053642
"Specify SEQUENCE to manually archive the online redo log file group identified by the log sequence number integer in the specified thread."
This implies that Oracle will only archive the log group identified by the log sequence number I specify in the alter system archive log sequence statement. However, Oracle is archiving almost all of the log groups (see my first post for an example).
This appears to be a bug, unless there is some other system parameter that is configured (by default) to allow Oracle to archive as many log groups as possible.
As to the reason why - it is an application requirement. The Oracle db must be in archive mode, automatic archiving must be disabled and the application must control online redo log archiving.
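For reference, the usual checks for the archiver configuration under 9i look like this (a sketch from SQL*Plus):

```sql
-- Is the database in ARCHIVELOG mode?
SELECT log_mode FROM v$database;

-- Is automatic archiving enabled? (9i parameter; deprecated in later releases)
SELECT value FROM v$parameter WHERE name = 'log_archive_start';

-- SQL*Plus summary of mode, current sequence and oldest online sequence
ARCHIVE LOG LIST
```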
DGR -
Select from .. as of - using archived redo logs - 10g
Hi,
I was under the impression I could issue a "Select from .. as of" statement back in time if I have the archived redo logs.
I've been searching for a while and can't find an answer.
My undo_management=AUTO, the database is 10.2.0.1, and the retention is the default of 900 seconds as I've never changed it.
I want to query a table as of 24 hours ago so I have all the archived redo logs from the last 48 hours in the correct directory
When is issue the following query
select * from supplier_codes AS OF TIMESTAMP
TO_TIMESTAMP('2009-08-11 10:01:00', 'YYYY-MM-DD HH24:MI:SS')
I get a snapshot too old (ORA-01555) error. I guess that is because my retention is only 900 seconds, but I thought the database would query the archived redo logs. Or have I got that totally wrong?!
My undo tablespace is set to AUTOEXTEND ON and MAXSIZE UNLIMITED so there should be no space issues
Any help would be greatly appreciated!
Thanks
Robert
If you want to go back 24 hours, you need to be able to undo the changes: a flashback query is served from undo data, not from the archived redo logs, so the undo from 24 hours ago must still be available.
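A minimal sketch of making a 24-hour flashback query feasible, assuming the undo tablespace can actually hold a day's worth of undo:

```sql
-- Raise the undo retention target to 24 hours (best effort, not guaranteed)
ALTER SYSTEM SET undo_retention = 86400;

-- See the retention the instance is actually achieving (10g, V$UNDOSTAT)
SELECT MAX(tuned_undoretention) FROM v$undostat;
```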
See e.g. the app dev guide - fundamentals, chapter on Flashback features: [doc search|http://www.oracle.com/pls/db102/ranked?word=flashback&remark=federated_search]. -
How to disable write to redo log file in oracle7.3.4
In Oracle 8, a table can be altered so that certain operations are not logged in the redo log file, like: alter table tablename nologging;
How do I do this in Oracle 7.3.4?
thanks.
user652965 wrote:
Thanks very much for your help, guys. I appreciate it. Unfortunately none of these commands worked for me. I kept getting an error on clearing logs that the redo log is needed to perform recovery, so it can't be cleared. So I ended up restoring from an earlier snapshot of my db volume. The database is now open.
Thanks again for your input.
And now, as a follow-up: at a minimum you should make sure that all redo log groups have at least 3 members. Then, if you lose a single redo log file, all you have to do is shut down the db and copy one of the good members (of the same group as the lost member) over the lost member.
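Adding a member to an existing group can be sketched like this (the file path is a hypothetical example):

```sql
-- Multiplex group 1 with an additional member on a separate disk
ALTER DATABASE ADD LOGFILE MEMBER '/u03/oradata/log_g1m3.dbf' TO GROUP 1;
```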
And as an additional follow-up, if you value your data you will run in archivelog mode and take regular backups of the database and archivelogs. If you fail to do this you are saying that your data is not worth saving. -
Is There a Way to Run a Redo log for a Single Tablespace?
I'm still fairly new to Oracle. I've been reading up on the architecture and I am getting the hang of it. Actually, I have 2 questions.
1) My first question is..."Is there a way to run the redo log file...but to specify something so that it only applies to a single tablespace and its related files?"
So, in a situation where, for some reason, only a single dbf file has become corrupted, I only have to worry about replaying the log for those transactions that affect the tablespace associated with that file.
2) Also, I would like to know if there is a query I can run from iSQLPlus that would allow me to view the datafiles that are associated with a tablespace.
Thanks
1) My first question is..."Is there a way to run the redo log file...but to specify something so that it only applies to a single tablespace and its related files?"
No, you can't restrict a redo log file to recording the transaction entries of a particular tablespace; redo is generated for the database as a whole.
If a file gets corrupted, you need to restore it from backup and apply all the archived logs since that backup, plus the online redo logs, to bring the database back to a consistent state.
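When only one datafile is damaged, RMAN can restore and recover just that file rather than the whole database (a sketch; file number 5 is a hypothetical example):

```
RMAN> RESTORE DATAFILE 5;   # restore the damaged file from backup
RMAN> RECOVER DATAFILE 5;   # apply archived and online redo to that file
```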
2) Also, I would like to know if there is a query I can run from iSQL*Plus that would allow me to view the datafiles that are associated with a tablespace.
Select file_name, tablespace_name from dba_data_files;
The above will list the datafiles that each tablespace is made of.
In your case you have created the tablespace with one datafile.
Message was edited by:
Maran.E