LOB-redo log content
Hi,
we have to take over a new application in production.
We are worried about the redo log volume, because we have many tables
with LOB columns. So I would like to know which redo information is
generated when a LOB is inserted. I can't believe that the whole content
of the LOB is written to the redo log. I can't find a detailed description
in the manuals. Thanks for any explanation.
Hi
Oracle must write everything that changes to the redo logs... otherwise how would it be possible to perform a recovery?
The only exception is, for some operations, when NOLOGGING is enabled. This "option" is usually set at the table level; with LOBs you can set it at the LOB level.
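For illustration, the logging attribute can go in the LOB storage clause; a sketch (table and column names are made up):

```sql
-- Hypothetical table: the LOB is stored NOCACHE NOLOGGING, so direct-path
-- writes to it generate only minimal redo (at the cost of recoverability)
CREATE TABLE doc_store (
  id   NUMBER,
  body CLOB
)
LOB (body) STORE AS (NOCACHE NOLOGGING);
```

Note that CACHE LOBs always log; NOLOGGING is only valid together with NOCACHE.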
Chris
Similar Messages
-
Redo log content in NOARCHIVELOG mode
I have several servers running Oracle 9i databases in NOARCHIVELOG mode. I know this means that the online redo logs are not archived. I have been tracing the Oracle log writer with truss, and only see I/O going to the Oracle control files. I see no I/O going to the online redo logs. Can someone point me to the Oracle documentation that discusses exactly what gets written to the online redo log files while in NOARCHIVELOG mode? Thanks in advance for any assistance.
Try Oracle Concepts on
http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96524.pdf -
what do you mean by change vectors? what exactly do they contain?
Thank you for your reply.
I checked out the suggested link and found the following content.
Online Redo Log Contents
Online redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database. For example, if you change a salary value in an employee table, you generate a redo record containing change vectors that describe changes to the data segment block for the table, the rollback segment data block, and the transaction table of the rollback segments.
Redo entries record data that you can use to reconstruct all changes made to the database, including the rollback segments.
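To see what these change vectors look like concretely, one option (a sketch; the file path is a placeholder) is to dump a redo log file to a trace file:

```sql
-- Writes the decoded redo records, including each change vector, to a
-- trace file in user_dump_dest; the log file path below is a placeholder
ALTER SYSTEM DUMP LOGFILE '/u01/oradata/redo01.log';
```

Each record in the dump identifies the changed block by its address (file# and block#) and carries the change applied to it.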
but I am still not clear on this concept.
Does a change vector contain the addresses of the changed blocks in the data & undo segments, or the actual data that is changing? -
LOB changes not written in redo logs?
Hi there,
I'm just evaluating replication software for synchronizing Oracle databases. We use a 9.2 Database Standard Edition One.
I was told by a consultant that LOB tables cannot be synchronized, because the changes are not written into the redo logs. Is this true?
Thanks, Sven
Sven,
I believe there is no such restriction in Oracle.
Consider this test case (Oracle 9.2.0.6 Ent. Edition, Windows 2003):
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.
SQL> create table test_lob(lob clob) ;
Table created.
SQL> alter system switch logfile;
System altered.
SQL> declare
2 l_lob CLOB ;
3 begin
4 insert into test_lob values (empty_clob()) returning lob into l_lob ;
5 dbms_lob.writeappend(l_lob, 4, 'test') ;
6 end ;
7 /
PL/SQL procedure successfully completed.
SQL> alter system switch logfile;
System altered.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME =>'j:\orant\archive_logs\Arc1_731.arc', OPTIONS => DBMS_LOGMNR.NEW);
PL/SQL procedure successfully completed.
SQL>
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed.
SQL> select scn, sql_redo from V$LOGMNR_CONTENTS;
returns
1101810606178,set transaction read write;
1101810606178,insert into "OBIDEM00"."TEST_LOB"("LOB") values (EMPTY_CLOB());
1101810606178,update "OBIDEM00"."TEST_LOB" set "LOB" = 'test' where "LOB" = EMPTY_CLOB() and ROWID = 'AAAV7JAABAAAacJAAA';
As you can see, the LOB-related statements are in the archived log, which means they were written to the redo log.
The question is how the LOB gets created in your case. If it is loaded through a direct-path load, or you are using the NOLOGGING option, then it is possible that the LOB data will not appear in the redo log. In that case it is not an Oracle restriction, but a peculiarity of your application.
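For example (a sketch, reusing the test table above; the ALTER and the hint are illustrative), a direct-path insert into a NOLOGGING LOB generates only minimal redo and would therefore be invisible to LogMiner-based replication:

```sql
-- Hypothetical: switch the LOB to NOLOGGING, then do a direct-path load;
-- the LOB data written this way is not fully recorded in the redo stream
ALTER TABLE test_lob MODIFY LOB (lob) (NOCACHE NOLOGGING);

INSERT /*+ APPEND */ INTO test_lob
SELECT TO_CLOB('bulk loaded') FROM dual;
COMMIT;
```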
Mike -
Usage of Redo Log Groups & Disk Contention
Hi,
I have a peculiar problem here.
I have the redo log groups/members configured in the following manner. Please note that the disks alternate in an A-B-A-B-A-B sequence for successive redo groups.
GROUP# MEMBER
11 /origlogA/log_g11m1.dbf
11 /mirrlogA/log_g11m2.dbf
12 /origlogB/log_g12m1.dbf
12 /mirrlogB/log_g12m2.dbf
13 /origlogA/log_g13m1.dbf
13 /mirrlogA/log_g13m2.dbf
14 /origlogB/log_g14m1.dbf
14 /mirrlogB/log_g14m2.dbf
15 /origlogA/log_g15m1.dbf
15 /mirrlogA/log_g15m2.dbf
16 /origlogB/log_g16m1.dbf
16 /mirrlogB/log_g16m2.dbf
17 /origlogA/log_g17m1.dbf
17 /mirrlogA/log_g17m2.dbf
18 /origlogB/log_g18m1.dbf
18 /mirrlogB/log_g18m2.dbf
19 /origlogA/log_g19m1.dbf
19 /mirrlogA/log_g19m2.dbf
20 /origlogB/log_g20m1.dbf
20 /mirrlogB/log_g20m2.dbf
21 /origlogA/log_g21m1.dbf
21 /mirrlogA/log_g21m2.dbf
22 /origlogB/log_g22m1.dbf
22 /mirrlogB/log_g22m2.dbf
23 /origlogA/log_g23m1.dbf
23 /mirrlogA/log_g23m2.dbf
24 /origlogB/log_g24m1.dbf
24 /mirrlogB/log_g24m2.dbf
But Oracle uses these groups in a zig-zag manner (please refer to the list below). Here, after group# 15, group# 11 is used, and the members of these two groups are on the same set of disks, i.e. /origlogA and /mirrlogA.
(Note:The following result is ordered by sequence #)
GROUP# SEQUENCE# MEMBER
16 263076 /origlogB/log_g16m1.dbf
16 263076 /mirrlogB/log_g16m2.dbf
17 263077 /origlogA/log_g17m1.dbf
17 263077 /mirrlogA/log_g17m2.dbf
18 263078 /origlogB/log_g18m1.dbf
18 263078 /mirrlogB/log_g18m2.dbf
19 263079 /origlogA/log_g19m1.dbf
19 263079 /mirrlogA/log_g19m2.dbf
20 263080 /origlogB/log_g20m1.dbf
20 263080 /mirrlogB/log_g20m2.dbf
21 263081 /origlogA/log_g21m1.dbf
21 263081 /mirrlogA/log_g21m2.dbf
22 263082 /origlogB/log_g22m1.dbf
22 263082 /mirrlogB/log_g22m2.dbf
23 263083 /origlogA/log_g23m1.dbf
23 263083 /mirrlogA/log_g23m2.dbf
24 263084 /origlogB/log_g24m1.dbf
24 263084 /mirrlogB/log_g24m2.dbf
13 263085 /origlogA/log_g13m1.dbf
13 263085 /mirrlogA/log_g13m2.dbf
14 263086 /origlogB/log_g14m1.dbf
14 263086 /mirrlogB/log_g14m2.dbf
15 263087 /origlogA/log_g15m1.dbf
15 263087 /mirrlogA/log_g15m2.dbf
11 263088 /origlogA/log_g11m1.dbf
11 263088 /mirrlogA/log_g11m2.dbf
12 263089 /origlogB/log_g12m1.dbf
12 263089 /mirrlogB/log_g12m2.dbf
Is there any way we can force Oracle to use the log groups in strict succession (like 11-12-13-14-15-16-17-18-19-20 etc.)?
I want to make sure there is no chance of contention due to the archiving of the offline redo log & LGWR writing to the online redo log happening on the same disk.
Thanks in advance,
Don
Hi,
There's no way to achieve what you're trying to do except:
1/ Switch logfile until the current group is the last one.
2/ Drop groups from 1 to (last - 2).
3/ Create groups 1, 2, 3 (or 11, 12, 13, 14, ... it doesn't matter).
4/ Switch logfile twice.
5/ Alter system checkpoint.
6/ Drop the former 2 or 3 remaining groups (19, 20, 21, ...).
7/ Recreate them.
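A sketch of those steps in SQL (group numbers, file paths, and sizes are illustrative; a group can only be dropped once it is INACTIVE, and the old files on disk must be removed by hand):

```sql
ALTER SYSTEM SWITCH LOGFILE;      -- repeat until the current group is the last one
ALTER DATABASE DROP LOGFILE GROUP 11;
ALTER DATABASE ADD LOGFILE GROUP 11
  ('/origlogA/log_g11m1.dbf', '/mirrlogA/log_g11m2.dbf') SIZE 200M;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;          -- lets the last old groups go INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 24;
```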
But I'd like to point out that having them go in order is perfectly useless.
And you're a priori doing something dangerous by having log members of different groups on the same disk. Usually I'd choose to:
. Put member 1 on disk 1
. Put member 2 on disk 2
. Increase the number of archiver processes
. Ensure disks 1 and 2 are not RAID disks
Regards,
Yoann. -
How can I check the contents of the redo log files?
I recently installed 10g. We haven't used the db yet, but the redo logs are very active.
Is there a query to determine what activity is filling the redo log files?
Thank you for any help.
Take Care.
S
The Automatic Workload Repository takes snapshots of statistics, via a scheduled job, at hourly intervals.
This is probably responsible for some of the redo.
You can see if this feature is turned on by checking
the job queue (simply: select * from dba_jobs).
The default settings on AWR can be managed using the
DBMS_WORKLOAD_REPOSITORY built-in package.
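If AWR does turn out to be the source, its snapshot interval can be lengthened; a sketch (the 240-minute interval is just an example value):

```sql
-- Check the current AWR snapshot settings
SELECT snap_interval, retention FROM dba_hist_wr_control;

-- Hypothetical: take snapshots every 4 hours instead of hourly
EXECUTE DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 240);
```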
Hope this helps.
Kailash. -
Need to understand when redo log file contents are written to datafiles
Hi all
I have a question about when the contents of the redo log files are written to the datafiles.
Supposing that the database is in NOARCHIVELOG mode and all redo log files are filled, the official Oracle database documentation says that *a filled redo log file is available
after the changes recorded in it have been written to the datafiles*, which seems to mean that we just need to have all the redo log files filled to "*commit*" changes to the database.
Thanks for help
Edited by: rachid on Sep 26, 2012 5:05 PM
rachid wrote:
the official oracle database documentation says that: a filled redo log file is available after the changes recorded in it have been written to the datafiles
It helps if you include a URL to the page where you found this quote (if you were using the online html manuals).
The wording is poor and should be modified to something like:
<blockquote>
+"a filled online redo log file is available for re-use after all the data blocks that have been changed by change vectors recorded in the log file have been written to the data files"+
</blockquote>
Remember if a data block that is NOT an undo block has been changed by a transaction, then an UNDO block has been changed at the same time, and both change vectors will be in the redo log file. The redo log file cannot, therefore, be re-used until the data block and the associated UNDO block have been written to disc. The change to the data block can thus be rolled back (uncommitted changes can be written to data files) because the UNDO is also available on disc if needed.
If you find the manuals too fragmented to follow you may find that my book, Oracle Core, offers a narrative description that is easier to comprehend.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: <b><em>Oracle Core</em></b> -
Best practice - online redo logs and virtualization
I have a 10.1.0.4 instance (soon to be migrated to 11gr2) running under Windows Server 2003.
We use a non-standard disk distribution scheme -
on the c: drive we have oracle_home as well as directories for control files and online redo logs.
on the d: drive we have datafiles
on the e: drive we have archive log files and another directory with online redo logs and another copy of control file
my question is this:
is it smart practice to have ANY online redo logs or control file on the same spindle with archive logs?
Our setup works fairly well, but we are in the process of migrating the instance first to an ESX server and SAN, and then to 11gR2 64-bit under Server 2008 64-bit. When we bring up our instance on the VM for testing and benchmark the ESX server (dual Xeon 3.4GHz with 48GB RAM, running against a FalconStor NSS SAN with 15k SAS disks over iSCSI) against the production physical server (dual Xeon 2.0GHz with 4GB RAM, using direct-attached 7200rpm SATA drives), we find that some processes run faster on the ESX box and some run 40-100% slower. Running Statspack seems to identify lots of physical read waits as well as some waits for redo and control files.
Is it possible that in addition to any overhead introduced by ESX and iSCSI (we are running Jumbo Frames over 1gb) we may have contention because the archive logs are on the same "spindle" (virtual) as the online redo and control files?
We're looking at multiple avenues to bring the 2 servers in line from a performance standpoint: db configuration, memory allocation, a possible move to a 10gb network, a possible move to an SSD storage tray, possible application rewrites. But as the simplest low-hanging fruit: if these files should not be on the same spindle, that's an easy change to make that could possibly eke out an improvement.
Ideas?
Mike
Hi,
"Old" Oracle standard is to use as many spindles as possible.
It looks to me like you have only 1 disk with several partitions on it?
In my honest opinion you should in any case start by physically separating the OS from Oracle, so leave the C: drive to the Windows OS.
Take another physically separate D: drive to install your application.
Use yet another set of physical drives, preferably in a RAID10 setup, for your database and redo logs.
And finally yet another disk for the archive logs.
We recently configured a Windows 2008 server with an 11g DB, which pretty much follows the above setup.
All non-RAID10 disks are RAID1 (mirror) and we even have some SSDs for hot tables and redo logs.
The machine, or should I say the database, operates like a high-speed train: very, very fast.
Of course keep in mind the number of cores (not only for licensing) and the amount of memory.
Try to prevent the system from swapping, because that is a performance killer!
Edit: And even if you put a virtual layer in between, try to separate the virtual disks as much as possible over physical disks.
Success!
FJFranken
Edited by: fjfranken on 7-okt-2011 7:19 -
Physical standby database standby redo log problem
Hello
We have a physical standby database. I've created some standby redo log files, but my problem is that they aren't used;
their status in the v$standby_log view is UNASSIGNED,
and I see this message (ORA-16086: standby database does not contain available standby log files) in the primary database alert_log file,
even though when I run "alter system switch logfile" in the primary database it transfers the redo to the physical standby database
and an archive log file is created on the standby database.
I've even recreated the standby redo log files and added new ones, but the problem wasn't solved.
Do you know what the problem is?
select group#,THREAD#,BYTES,STATUS from V$STANDBY_LOG;
group# THREAD# BYTES STATUS
1 0 524288000 UNASSIGNED
2 0 524288000 UNASSIGNED
3 0 524288000 UNASSIGNED
8 0 524288000 UNASSIGNED
9 0 524288000 UNASSIGNED
10 0 524288000 UNASSIGNED
select group#,THREAD#,BYTES,MEMBERS,STATUS from v$log;
group# THREAD# BYTES MEMBERS STATUS
4 1 524288000 2 CLEARING
7 1 524288000 2 CLEARING_CURRENT
6 1 524288000 2 CLEARING
5 1 524288000 2 CLEARING
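For reference, standby redo logs are normally created with an explicit thread number and sized exactly like the online logs; a sketch (the file path is a placeholder, and one more standby group than online groups is the usual recommendation):

```sql
-- Hypothetical: add a standby redo log group for thread 1, same 500M size
-- as the online redo logs shown in v$log above
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
  ('/disks/sdb/tehrep/srl01.log') SIZE 500M;
```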
thanks
Hello Anurag
Thank you for your reply
I have found some issue in the standby database alert_log too , in the standby database alert_log it has been written:
RFS[782]: Assigned to RFS process 3919
RFS[782]: Identified database type as 'physical standby'
Primary database is in MAXIMUM AVAILABILITY mode
Standby controlfile consistent with primary
Primary database is in MAXIMUM AVAILABILITY mode
Standby controlfile consistent with primary
RFS[782]: No standby redo logfiles selected (reason:6)
Sun Jan 31 13:59:43 2010
Errors in file /u01/app/oracle/admin/tehrep/udump/tehrep_rfs_3919.trc:
ORA-16086: standby database does not contain available standby log files
Sun Jan 31 13:59:48 2010
RFS[781]: Archived Log: '/disks/sda/tehrep/archivelogs/1_6516_670414641.dbf'
Sun Jan 31 13:59:50 2010
and the context "/u01/app/oracle/admin/tehrep/udump/tehrep_rfs_3919.trc" is below :
+/u01/app/oracle/admin/tehrep/udump/tehrep_rfs_3919.trc+
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
System name: Linux
Node name: linserver2.com
Release: 2.6.9-42.ELsmp
Version: #1 SMP Wed Jul 12 23:27:17 EDT 2006
Machine: i686
Instance name: tehrep
Redo thread mounted by this instance: 1
Oracle process number: 58
Unix process pid: 3919, image: [email protected]
*** SERVICE NAME:() 2010-01-31 13:59:43.865
*** SESSION ID:(109.1225) 2010-01-31 13:59:43.865
KCRRFLAS
KCRRSNPS
No space in recovery area for active standby redo logs
The primary database is operating in MAXIMUM PROTECTION
or MAXIMUM AVAILABILITY mode, and the standby database
does not contain adequate disk space in the recovery area
to safely archive the contents of the standby redo logfiles.
ORA-16086: standby database does not contain available standby log files
When I saw the line "No space in recovery area for active standby redo logs" I thought that the STANDBY_ARCHIVE_DEST parameter pointed somewhere with no free space, but when I checked I found that it points to a directory on disk "sda" that has enough space, so I don't know what that message means.
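Since the trace complains about recovery area space rather than STANDBY_ARCHIVE_DEST, it may be worth checking the flash recovery area quota on the standby; a sketch:

```sql
-- Space limit vs. space used in the flash recovery area; if used is close
-- to the limit, db_recovery_file_dest_size may need to be raised
SELECT name,
       space_limit/1024/1024 AS limit_mb,
       space_used/1024/1024  AS used_mb
FROM   v$recovery_file_dest;
```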
By the way, below I've written a section of the primary database alert_log and the "lgwr" trace file from around Sun Jan 31 13:30:34 2010.
alert_log :
ORA-16086: standby database does not contain available standby log files
Sun Jan 31 13:30:34 2010
LGWR: Failed to archive log 7 thread 1 sequence 6512 (16086)
Thread 1 advanced to log sequence 6512
Current log# 7 seq# 6512 mem# 0: /disks/sdb/tehrep/redo71.log
Current log# 7 seq# 6512 mem# 1: /disks/sdd/tehrep/redo72.log
LNSc started with pid=53, OS id=11451
Sun Jan 31 13:36:34 2010
Errors in file /u01/app/oracle/admin/tehrep/bdump/tehrep_lgwr_3692.trc:
ORA-16086: standby database does not contain available standby log files
Sun Jan 31 13:36:34 2010
LGWR: Failed to archive log 5 thread 1 sequence 6513 (16086)
Thread 1 advanced to log sequence 6513
Current log# 5 seq# 6513 mem# 0: /disks/sdb/tehrep/redo51.log
Current log# 5 seq# 6513 mem# 1: /disks/sdd/tehrep/redo52.log
*/u01/app/oracle/admin/tehrep/bdump/tehrep_lgwr_3692.trc file :*
Error 16086 creating standby archive log file at host '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=linserver2.com
+)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=tehrep_XPT.com)(INSTANCE_NAME=tehrep)(SERVER=dedicated)))'+
*** 2010-01-31 13:30:34.712 60679 kcrr.c
LGWR: Attempting destination LOG_ARCHIVE_DEST_3 network reconnect (16086)
*** 2010-01-31 13:30:34.712 60679 kcrr.c
LGWR: Destination LOG_ARCHIVE_DEST_3 network reconnect abandoned
ORA-16086: standby database does not contain available standby log files
*** 2010-01-31 13:30:34.712 60679 kcrr.c
LGWR: Error 16086 creating archivelog file '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=linserver2.com)(PORT=1521
+)))(CONNECT_DATA=(SERVICE_NAME=tehrep_XPT.com)(INSTANCE_NAME=tehrep)(SERVER=dedicated)))'+
*** 2010-01-31 13:30:34.712 58941 kcrr.c
kcrrfail: dest:3 err:16086 force:0 blast:1
Receiving message from LNSc
*** 2010-01-31 13:30:34.718 55444 kcrr.c
Making upidhs request to LNSc (ocis 0x0xb648db48). Begin time is <01/31/2010 13:30:30> and NET_TIMEOUT <180> seconds
NetServer pid:11196
*** 2010-01-31 13:30:38.718 55616 kcrr.c
upidhs done status 0
*** 2010-01-31 13:36:31.062
LGWR: Archivelog for thread 1 sequence 6513 will NOT be compressed
*** 2010-01-31 13:36:31.062 53681 kcrr.c
+Initializing NetServer[LNSc] for dest=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=linserver2.com)(PORT=1521)))(CO+
NNECT_DATA=(SERVICE_NAME=tehrep_XPT.com)(INSTANCE_NAME=tehrep)(SERVER=dedicated))) mode SYNC
LNSc is not running anymore.
New SYNC LNSc needs to be started
Waiting for subscriber count on LGWR-LNSc channel to go to zero
Subscriber count went to zero - time now is <01/31/2010 13:36:31>
Starting LNSc ...
Waiting for LNSc to initialize itself
*** 2010-01-31 13:36:34.116 53972 kcrr.c
+Netserver LNSc [pid 11451] for mode SYNC has been initialized+
Performing a channel reset to ignore previous responses
+Successfully started LNSc [pid 11451] for dest (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=linserver2.com)(PORT=1+
+521)))(CONNECT_DATA=(SERVICE_NAME=tehrep_XPT.com)(INSTANCE_NAME=tehrep)(SERVER=dedicated))) mode SYNC ocis=0x0xb648db48+
*** 2010-01-31 13:36:34.116 54475 kcrr.c
+Making upiahm request to LNSc [pid 11451]: Begin Time is <01/31/2010 13:36:31>. NET_TIMEOUT = <180> seconds+
Waiting for LNSc to respond to upiahm
*** 2010-01-31 13:36:34.266 54639 kcrr.c
upiahm connect done status is 0
Receiving message from LNSc
Receiving message from LNSc
Destination LOG_ARCHIVE_DEST_3 is in STANDBY RESYNCHRONIZATION mode
Receiving message from LNSc -
How to reduce excessive redo log generation in Oracle 10G
Hi All,
Please let me know if there is any way to reduce excessive redo log generation in Oracle DB 10.2.0.3.
Previously only about 15 archive log files were generated per day, but nowadays it has increased to 40-45.
Below are the sizes of the redo log members:
L.BYTES/1024/1024 MEMBER
200 /u05/applprod/prdnlog/redolog1a.dbf
200 /u06/applprod/prdnlog/redolog1b.dbf
200 /u05/applprod/prdnlog/redolog2a.dbf
200 /u06/applprod/prdnlog/redolog2b.dbf
200 /u05/applprod/prdnlog/redolog3a.dbf
200 /u06/applprod/prdnlog/redolog3b.dbf
Here is some content of the alert log for your reference, showing how frequently log switches are occurring:
Beginning log switch checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
Thread 1 advanced to log sequence 17439
Current log# 3 seq# 17439 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
Current log# 3 seq# 17439 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
Tue Jul 13 14:46:17 2010
Completed checkpoint up to RBA [0x441f.2.10], SCN: 4871839752
Tue Jul 13 14:46:38 2010
Beginning log switch checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
Thread 1 advanced to log sequence 17440
Current log# 1 seq# 17440 mem# 0: /u05/applprod/prdnlog/redolog1a.dbf
Current log# 1 seq# 17440 mem# 1: /u06/applprod/prdnlog/redolog1b.dbf
Tue Jul 13 14:46:52 2010
Completed checkpoint up to RBA [0x4420.2.10], SCN: 4871846489
Tue Jul 13 14:53:33 2010
Beginning log switch checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
Thread 1 advanced to log sequence 17441
Current log# 2 seq# 17441 mem# 0: /u05/applprod/prdnlog/redolog2a.dbf
Current log# 2 seq# 17441 mem# 1: /u06/applprod/prdnlog/redolog2b.dbf
Tue Jul 13 14:53:37 2010
Completed checkpoint up to RBA [0x4421.2.10], SCN: 4871897354
Tue Jul 13 14:55:37 2010
Incremental checkpoint up to RBA [0x4421.4b45c.0], current log tail at RBA [0x4421.4b5c5.0]
Tue Jul 13 15:15:37 2010
Incremental checkpoint up to RBA [0x4421.4d0c1.0], current log tail at RBA [0x4421.4d377.0]
Tue Jul 13 15:35:38 2010
Incremental checkpoint up to RBA [0x4421.545e2.0], current log tail at RBA [0x4421.54ad9.0]
Tue Jul 13 15:55:39 2010
Incremental checkpoint up to RBA [0x4421.55eda.0], current log tail at RBA [0x4421.56aa5.0]
Tue Jul 13 16:15:41 2010
Incremental checkpoint up to RBA [0x4421.58bc6.0], current log tail at RBA [0x4421.596de.0]
Tue Jul 13 16:35:41 2010
Incremental checkpoint up to RBA [0x4421.5a7ae.0], current log tail at RBA [0x4421.5aae2.0]
Tue Jul 13 16:42:28 2010
Beginning log switch checkpoint up to RBA [0x4422.2.10], SCN: 4872672366
Thread 1 advanced to log sequence 17442
Current log# 3 seq# 17442 mem# 0: /u05/applprod/prdnlog/redolog3a.dbf
Current log# 3 seq# 17442 mem# 1: /u06/applprod/prdnlog/redolog3b.dbf
Thanks in advance
hi,
Use the below script to find out in which hour the generation of archives is highest, and in that hour check, for example, whether MVs are refreshing... or whether any program is doing a delete * from table.
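Besides the hourly archive histogram, the sessions producing the redo right now can also be ranked; a sketch:

```sql
-- Sketch: sessions ordered by redo bytes generated since logon
SELECT s.sid, s.username, st.value AS redo_bytes
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
JOIN   v$session  s  ON s.sid = st.sid
WHERE  sn.name = 'redo size'
ORDER  BY st.value DESC;
```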
select
  to_char(first_time,'DD-MM-YY') day,
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
  to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
  COUNT(*) TOT
from v$log_history
group by to_char(first_time,'DD-MM-YY')
order by day
thanks,
baskar.l -
Redo log files in case of NOARCHIVELOG Mode.
This question is related to the Oracle architecture.
A database requires a minimum of two redo log files to guarantee that one is always available for writing while the other is being archived. This makes perfect sense when the DB is running in ARCHIVELOG mode, but it also forces the database to have 2 redo log files even when the DB is running in NOARCHIVELOG mode.
Any particular reason?
I am looking for reasons, not explanations of what the redo log is and what information it holds, etc.
pgoel wrote:
If you had only one file, all further changes would have to stop until all changed data blocks had been written to disc. By insisting on a minimum of two log files, Oracle can allow the log writer to fill the second log file while the database writer writes out the dirty blocks covered by the changes described in the first log file.
What about having one big redo log file instead of two, with the checkpoint initiated when the redo log file is half filled?
I mean, I understand the logic, two is better, even best... but I'm still not convinced. I am just trying to think: can Oracle not work with 1 group with 1 big file, especially in NOARCHIVELOG mode?
Edited by: pgoel on Mar 12, 2011 7:27 PM
No, you still didn't understand, and I am not sure how else to say it. Okay, think about the log groups as two buckets used to hold the redo content. LGWR fills one bucket at a time until it can't accept any more content. Once it is filled, rather than spilling the content out, it jumps over to the second bucket. Now, taking your suggestion of one big redo log: it doesn't matter how big a bucket you bring in, eventually it will fill up. It's not possible that you won't be able to fill it; it will just take longer than normal. That's all. So in any case you need the second bucket. And I am not sure why you are stuck on the archivelog mode. I hope you understand that it's an optional mode, which means it may or may not be enabled. If it is, that's a good thing: before flushing the redo content, Oracle can save it in the archived (think about the name; its very meaning is to preserve, to archive) log file. If not, the redo content is lost and there is no record of the transactions, leaving you to re-enter them. The archivelog/noarchivelog question you are stuck on is beside the point: in either mode, Oracle still needs a minimum of two log groups.
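You can watch the "buckets" rotate with a simple query; a sketch:

```sql
-- CURRENT = being filled by LGWR; ACTIVE = filled but still needed for
-- crash recovery; INACTIVE = checkpoint complete, safe to reuse
SELECT group#, sequence#, status FROM v$log ORDER BY sequence#;
```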
HTH
Aman.... -
ORA-00333: redo log read error block 283081 count 8192
I am starting the database; it mounts, but after that it gives me this error:
ORA-00333: redo log read error block 283081 count 8192
Below are the contents of the alert log. Please advise.
Completed: ALTER DATABASE MOUNT
Tue Jan 20 10:24:45 2009
ALTER DATABASE OPEN
Tue Jan 20 10:24:45 2009
Beginning crash recovery of 1 threads
parallel recovery started with 2 processes
Tue Jan 20 10:24:45 2009
Started redo scan
Tue Jan 20 10:25:00 2009
Errors in file /d01/oracle/PROD1/db/tech_st/10.2.0/admin/PROD1_prod1/udump/prod1_ora_32356.trc:
ORA-00333: redo log read error block 283081 count 8192
ORA-00312: online log 2 thread 1: '/d01/oracle/PROD1/db/apps_st/data/log02a.dbf'
ORA-27072: File I/O error
Linux-x86_64 Error: 2: No such file or directory
Additional information: 4
Additional information: 283081
Additional information: 257536
Tue Jan 20 10:25:16 2009
Errors in file /d01/oracle/PROD1/db/tech_st/10.2.0/admin/PROD1_prod1/udump/prod1_ora_32356.trc:
ORA-00333: redo log read error block 283081 count 8192
ORA-00312: online log 2 thread 1: '/d01/oracle/PROD1/db/apps_st/data/log02a.dbf'
ORA-27091: unable to queue I/O
ORA-27072: File I/O error
Linux-x86_64 Error: 2: No such file or directory
Additional information: 4
Additional information: 283081
Additional information: 257536
Tue Jan 20 10:25:31 2009
I did what Prabhu told me, but I recovered using a backup controlfile, and when I was asked to apply logs I applied my oldest logs and it worked.
I have two groups with two members each.
I applied log1a.dbf and it said media recovery complete.
I opened the database, but then it started giving me errors for the undo tablespace.
I made another undo tablespace and tried dropping the old one, but it was not permitted, as it had some segments with status "needs recovery".
Then I added this parameter to the pfile with the corrupted segment name and tried to drop the segment again, but it still was not permitted:
_corrupted_rollback_segments = (corrupted_undo_segment_name)
Next, I mounted the database, ran another session of media recovery, and opened the database using resetlogs.
Then I dropped the old undo tablespace and it went through successfully.
If you think anything I did was wrong, please advise.
I hope this action plan helps you in case you come across the same errors some day.
I would be very thankful if you could refer me to a recovery document which covers all kinds of recoveries, scenarios, and commands too.
Thanks a lot -
DB Cache Full or Redo Log Full?
Is there any way that Oracle can write to the datafiles in the middle of a transaction?
I am reading, processing and writing very large LOBs, which gives the error "no free buffers available in buffer pool".
With LOBs, a LOB is not written until the whole transaction finishes; but in my case the LOB size is larger than the size of the data buffer cache.
The error is "ORA-00379: no free buffers available in buffer pool DEFAULT for block size 8K".
The exact question I would like answered is: which buffer is full, the data buffer cache or the redo log buffer?
If the data buffer cache, then is there a mechanism which allows data to be written to the datafiles in the middle of a transaction, as I have to process LOBs which are 3 to 4 times the size of the db cache?
I am referring to the same problem outlined in an earlier thread.
Thanks
Is there any way that Oracle can write to datafiles
in the middle of a transaction?
r.- Oracle writes only committed transactions to the datafiles, according to some factors
Iam reading, processing and writing very large sized
lobs which gives error that "no free buffers
available in buffer pool".
r.- You have to increase the size of the buffer Pool
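Two settings sometimes relevant here, as a sketch (table and column names are made up, and the cache size needs spfile support plus SGA headroom):

```sql
-- Hypothetical: enlarge the default buffer cache
ALTER SYSTEM SET db_cache_size = 1G SCOPE=BOTH;

-- Or store the LOB NOCACHE so its I/O bypasses the DEFAULT buffer pool
ALTER TABLE big_lobs MODIFY LOB (payload) (NOCACHE);
```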
When in lobs, a lob is not written until the whole
tranaction finishes - but in my case the lob size is
large than the size of the data buffer cache.
The error is "ORA-00379: no free buffers available in
buffer pool DEFAULT for block size 8K"
Exact question I would like to know now is that which
buffer is full; data_buffer_cache or the redo log
buffer?
data_buffer_cache. What version are you on?
If data_buffer cache, then is there a mechanism which
allows to write data to dtafiles in the middle of a
transaction as i have to do processing with lobs -
which are 3 to 4 times the size of the db cache
size.
r.- Oracle does not write to the datafiles in that way
I am referring to the same problem outlined in an
earlier thread.
Thanks Joel Pérez
http://www.oracle.com/technology/experts -
What if all the redo logs of a database are lost
I want to know how a database can be recovered if all three redo logs are lost at the same time.
Thanks,
Prabhath.
You will be able to find the detailed procedures for this kind of recovery here:
Backup and Recovery Concepts Contents / Search / Index / PDF
Backup and Recovery Documentation Online Roadmap Contents / Search / /
Recovery Manager Quick Reference Contents / Search / / PDF
Recovery Manager Reference Contents / Search / Index / PDF
Recovery Manager User's Guide Contents / Search / Index / PDF
http://otn.oracle.com/pls/db92/db92.docindex?remark=homepage
if you are making backups to the database using RMAN the recover is easier.
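As a rough sketch of what those procedures boil down to - assuming a valid backup exists and the needed archived logs are available; exact steps depend on the scenario and release - losing every online redo log typically means incomplete recovery followed by OPEN RESETLOGS:

```sql
-- SQL*Plus, connected AS SYSDBA. Hypothetical outline only; verify
-- against the Backup and Recovery documentation before using.
STARTUP MOUNT;

-- Cancel-based incomplete recovery: apply the available archived
-- logs, then type CANCEL when the lost online logs are requested.
RECOVER DATABASE UNTIL CANCEL;

-- RESETLOGS recreates the online redo log files.
ALTER DATABASE OPEN RESETLOGS;
```

Any redo in the lost online logs that was never archived is gone, so transactions after the last applied archived log cannot be recovered.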
Joel Pérez -
[DG Physical] ORA-00368: checksum error in redo log block
Hi all,
I'm building a DR solution with 1 primary & 2 DR site (Physical).
All DBs use Oracle 10.2.0.3.0 on Solaris 64bit.
The first one ran fine for some days (6); then I installed the 2nd. After restoring the DB (DUPLICATE TARGET DATABASE FOR STANDBY) and getting ready to apply redo, the DB fetched the missing archive log gaps and I got the following error:
==================
Media Recovery Log /global/u04/recovery/billhcm/archive/2_32544_653998293.dbf
Errors with log /global/u04/recovery/billhcm/archive/2_32544_653998293.dbf
MRP0: Detected read corruption! Retry recovery once log is re-fetched...
Wed Jan 27 21:46:25 2010
Errors in file /u01/oracle/admin/billhcm/bdump/billhcm1_mrp0_12606.trc:
ORA-00368: checksum error in redo log block
ORA-00353: log corruption near block 1175553 change 8236247256146 time 01/27/2010 18:33:51
ORA-00334: archived log: '/global/u04/recovery/billhcm/archive/1_47258_653998293.dbf'
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 8236247255373
===================
I suspected the RFS had fetched the file incorrectly, so I pulled it by FTP and continued the apply; it went through. Comparing the RFS copy with the FTP copy showed they differ. So I think something is wrong with the RFS, because the content of the archived log is not right. (I used BACKUP VALIDATE ARCHIVELOG SEQUENCE BETWEEN N1 AND N2 THREAD X to check all the archived logs the RFS fetched; there was corruption in every file.)
I restored the DR DB again and applied an incremental backup from the primary; now it runs well. I don't know what is happening, as I followed the same procedure for all DR DBs.
Last night I had to stop and restart DR site 1. Today I checked and it got the same error as the 2nd site, with corrupted redo. I tried deleting the archived logs and letting the RFS re-fetch them, but the files were corrupt too.
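For reference, a hedged sketch of the validation and tracing steps involved (sequence numbers and the trace level are placeholders; consult the Data Guard documentation for the trace levels in your release):

```sql
-- RMAN, on the standby: checksum-validate the fetched archived logs.
BACKUP VALIDATE ARCHIVELOG SEQUENCE BETWEEN 32500 AND 32544 THREAD 2;

-- SQL*Plus, AS SYSDBA: raise archive tracing so RFS activity is
-- written to trace files (the level shown is an assumed example).
ALTER SYSTEM SET LOG_ARCHIVE_TRACE = 8192;
```

The resulting trace files (in BACKGROUND_DUMP_DEST on 10.2) can show whether the corruption is introduced during transfer or when the log is written at the standby.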
If this continue to happen with the 2nd site again, that'll be a big problem.
DR site 1 and the primary are linked by a GB switch; site 2 by a 155 Mbps connection (adequate for my DB load, at about 1.5 MB/s average apply rate).
I searched Oracle Support (Metalink) but no luck; there is one case, but it mentions MAX_CONNECTIONS > 1 (mine is the default, 1).
Can someone show me how to troubleshoot/debug/trace this problem?
That would be a great help!
Thank you very much.
This (Replication) is the wrong forum for your posting.
Please post to the "Database - General" forum at
General Database Discussions
But, first, log an SR with Oracle Support.
Hemant K Chitale