Default redo log size 100MB?
Hi all,
My database is running in archivelog mode (Oracle 9i Release 2).
When I issue:
SQL> SELECT * FROM V$LOG;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 59 104857600 1 NO CURRENT 3519698 20-FEB-04
2 1 57 104857600 1 YES INACTIVE 3477638 20-FEB-04
3 1 58 104857600 1 YES INACTIVE 3479786 20-FEB-04
It's funny: the default size of the redo logs is 100MB, and every archived log file created on my database is also 100MB.
1. What is the reason for the default size of 100MB?
2. It's my production database and I want to change the size to 1MB.
Please give some suggestions on how to change it, because my boss doesn't want any mistakes.
If the solution is to drop and recreate the logs, that causes me trouble every time I do it, so please suggest how to change the size.
thanks
kuljeet pal singh
The sizes of the redo log members are determined by several considerations:
1.- The log switch interval that you want.
2.- The switch interval reflects the amount of time (which represents data) that you are prepared to lose if you lose all redo members of one redo log group.
3.- The switch interval also determines the size of the archived redo logs, which you need to store and handle in a comfortable way.
4.- Very small redo members (for example 1MB) can hurt the performance of your database.
5.- When they are too large, the database may have to wait while the redo logs are archived.
If you want to change the size of the redo members, you have to create new redo log groups and then drop the groups you no longer want. While doing this you must keep at least 2 redo log groups at all times.
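The procedure above can be sketched in SQL*Plus like this (group numbers, paths and the 50M size are illustrative, not taken from the poster's system):

```sql
-- 1. Add new groups of the desired size:
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/redo04.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/redo05.log') SIZE 50M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/redo06.log') SIZE 50M;

-- 2. Switch until none of the old groups is CURRENT, then checkpoint
--    so they become INACTIVE (and get archived):
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;

-- 3. Drop the old groups once they are INACTIVE and archived:
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE DROP LOGFILE GROUP 3;
-- The old operating system files must then be removed manually.
```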
Joel Pérez
http://otn.oracle.com/experts
Similar Messages
-
How do I manually archive 1 redo log at a time?
The database is configured in archive mode, but automatic archiving is turned off.
For both Oracle 901 and 920 on Windows, when I try to manually archive a single redo log, the database
archives as many logs as it can up to the log just before the current log:
For example:
SQL> select * from v$log order by sequence#;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 14 104857600 1 NO INACTIVE 424246 19-JAN-05
2 1 15 104857600 1 NO INACTIVE 425087 28-MAR-05
3 1 16 104857600 1 NO INACTIVE 425088 28-MAR-05
4 1 17 512000 1 NO INACTIVE 425092 28-MAR-05
5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
6 rows selected.
SQL> alter system archive log next;
System altered.
SQL> select * from v$log order by sequence#;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 14 104857600 1 YES INACTIVE 424246 19-JAN-05
2 1 15 104857600 1 YES INACTIVE 425087 28-MAR-05
3 1 16 104857600 1 YES INACTIVE 425088 28-MAR-05
4 1 17 512000 1 YES INACTIVE 425092 28-MAR-05
5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
See - instead of only 1 log being archived, 4 of them were. Oracle behaves the same way if I use the "sequence" option:
SQL> select * from v$log order by sequence#;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 14 104857600 1 NO INACTIVE 424246 19-JAN-05
2 1 15 104857600 1 NO INACTIVE 425087 28-MAR-05
3 1 16 104857600 1 NO INACTIVE 425088 28-MAR-05
4 1 17 512000 1 NO INACTIVE 425092 28-MAR-05
5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
6 rows selected.
SQL> alter system archive log sequence 17;
System altered.
SQL> select * from v$log order by sequence#;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 14 104857600 1 YES INACTIVE 424246 19-JAN-05
2 1 15 104857600 1 YES INACTIVE 425087 28-MAR-05
3 1 16 104857600 1 YES INACTIVE 425088 28-MAR-05
4 1 17 512000 1 YES INACTIVE 425092 28-MAR-05
5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
Is there some default system configuration property telling Oracle to archive as many logs as it can?
Thanks,
DGR

Thanks Yoann (and Syed Jaffar Hussain too),
but I don't have a problem finding the group to archive or executing the ALTER SYSTEM ARCHIVE LOG command.
My problem is that Oracle doesn't behave as I expect.
This comes from the Oracle 9.2 online doc:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_23a.htm#2053642
"Specify SEQUENCE to manually archive the online redo log file group identified by the log sequence number integer in the specified thread."
This implies that Oracle will only archive the log group identified by the log sequence number I specify in the alter system archive log sequence statement. However, Oracle is archiving almost all of the log groups (see my first post for an example).
This appears to be a bug, unless there is some other system parameter that is configured (by default) to allow Oracle to archive as many log groups as possible.
As to the reason why - it is an application requirement. The Oracle db must be in archive mode, automatic archiving must be disabled and the application must control online redo log archiving.
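For reference, the manual-archive forms discussed in this thread (the sequence number is illustrative):

```sql
-- Archive all full, unarchived online logs:
ALTER SYSTEM ARCHIVE LOG ALL;

-- Archive the next full, unarchived online log:
ALTER SYSTEM ARCHIVE LOG NEXT;

-- Archive the log group with a specific log sequence number:
ALTER SYSTEM ARCHIVE LOG SEQUENCE 17;
```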
DGR -
Select from .. as of - using archived redo logs - 10g
Hi,
I was under the impression that I could issue a "SELECT ... AS OF" query back in time if I have the archived redo logs.
I've been searching for a while and can't find an answer.
My undo_management=AUTO, the database is 10.2.0.1, and the retention is the default 900 seconds, as I've never changed it.
I want to query a table as of 24 hours ago, so I have all the archived redo logs from the last 48 hours in the correct directory.
When I issue the following query:
select * from supplier_codes AS OF TIMESTAMP
TO_TIMESTAMP('2009-08-11 10:01:00', 'YYYY-MM-DD HH24:MI:SS')
I get an ORA-01555 "snapshot too old" error. I guess that is because my retention is only 900 seconds, but I thought the database would query the archived redo logs - or have I got that totally wrong?
My undo tablespace is set to AUTOEXTEND ON and MAXSIZE UNLIMITED, so there should be no space issues.
Any help would be greatly appreciated!
Thanks
Robert

If you want to go back 24 hours, you need the undo for those changes...
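Flashback query reads undo, not archived redo, so the undo retention must cover the window being queried. A hypothetical sketch (the undo tablespace name is an assumption):

```sql
-- Retention is specified in seconds; 86400 = 24 hours:
ALTER SYSTEM SET undo_retention = 86400 SCOPE=BOTH;

-- Optionally guarantee retention (at the risk of DML failing
-- if the undo tablespace fills up):
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
```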
See e.g. the app dev guide - fundamentals, chapter on Flashback features: [doc search|http://www.oracle.com/pls/db102/ranked?word=flashback&remark=federated_search]. -
Best way to move redo log from one disk group to another in ASM?
Hi All,
Our DB is a 10.2.0.3 RAC database, and the database servers run Windows 2003 Server.
We need to move more than 50 redo logs (some regular and some standby), which are not redundant, from one disk group to another - say from disk group 1 to disk group 2. Here are the options we are considering, but we are not sure which is best from an ease and safety perspective.
Thank you very much for your help in advance.
Shirley
Option 1:
1) shutdown immediate
2) copy log files from disk group 1 to disk group2 using RMAN (need to research on this)
3) startup mount
4) alter database rename file ….
5) alter database open
6) delete the redo files from disk group 1 in ASM (how?)
Option 2:
1) create a set of redo log groups in disk group 2
2) drop the redo log groups in disk group 1 when they are inactive and have been archived
3) delete the redo files associated with those dropped groups from disk group 1 (how?) (According to Oracle menu: when you drop the redo log group the operating system files are not deleted and you need to manually delete those files)
Option 3:
1) create a set of redo members in disk group 2 for each redo log group in disk group 1
2) drop the redo log memebers in disk group 1
3) delete the redo files from disk group 1 associated with the dropped members

Absolutely not, they are not even remotely similar concepts.
OMF: Oracle Managed Files. It is an RDBMS feature: no matter what your storage technology is, Oracle takes care of file naming and location; you only have to define the size of a file. In the case of a tablespace on an OMF-configured database, you only need to issue a command similar to this:
CREATE TABLESPACE <TSName>;
The OMF environment then creates an autoextensible datafile at the predefined location, with 100M by default as its initial size.
On ASM it should only be required to specify '+DGroupName' as the datafile or redo log file argument so it can be fully managed by ASM.
EMC: http://www.emc.com - no further comments on it.
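Option 2 from the question might look like this (disk group and group numbers are illustrative; with OMF on ASM the dropped files are typically cleaned up automatically, otherwise remove them via asmcmd):

```sql
-- 1. Create replacement groups in the target disk group:
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 11 '+DG2' SIZE 100M;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 12 '+DG2' SIZE 100M;

-- 2. Once an old group is INACTIVE and archived, drop it:
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE DROP LOGFILE GROUP 1;

-- Standby redo logs are handled analogously:
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 21 '+DG2' SIZE 100M;
ALTER DATABASE DROP STANDBY LOGFILE GROUP 5;
```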
~ Madrid
http://hrivera99.blogspot.com -
ORA-00333: redo log read error block
ORA-01033: ORACLE initialization or shutdown in progress ...
/ as sysdba
SQL> shutdown immediate;
SQL> startup nomount;
SQL> alter database mount;
SQL> alter database open;
ORA-00333: redo log read error block 8299 count 8192
SQL> SELECT * FROM V$VERSION;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Product
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> select group#, members, thread#, status, archived, bytes, first_time, first_change#, sequence# from v$log;
GROUP# MEMBERS THREAD# STATUS   ARC BYTES    FIRST_TIME FIRST_CHANGE# SEQUENCE#
     1       1       1 CURRENT  NO  52428800 29-FEB-12        1597643        57
     2       1       1 INACTIVE NO  52428800 29-FEB-12        1573462        56
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
Wed Feb 29 19:46:38 2012
Recovery of Online Redo Log: Thread 1 Group 1 Seq 56 Reading mem 0
Mem# 0 errs 0: C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_1_7LZYZK8S_.LOG
Wed Feb 29 19:46:40 2012
Completed redo application
Wed Feb 29 19:46:40 2012
Completed crash recovery at
Thread 1: logseq 56, block 6568, scn 1597642
270 data blocks read, 270 data blocks written, 1460 redo blocks read
Wed Feb 29 19:46:43 2012
Thread 1 advanced to log sequence 57
Thread 1 opened at log sequence 57
Current log# 2 seq# 57 mem# 0: C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_7LZYZL5V_.LOG
Successful open of redo thread 1
Wed Feb 29 19:46:43 2012
SMON: enabling cache recovery
Wed Feb 29 19:46:55 2012
Successfully onlined Undo Tablespace 1.
Wed Feb 29 19:46:55 2012
SMON: enabling tx recovery
Wed Feb 29 19:46:56 2012
Database Characterset is AL32UTF8
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=19, OS id=3024
Wed Feb 29 19:47:09 2012
Completed: alter database open
Wed Feb 29 19:47:14 2012
db_recovery_file_dest_size of 10240 MB is 0.98% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Feb 29 20:33:30 2012
MMNL absent for 1537 secs; Foregrounds taking over
Wed Feb 29 20:33:31 2012
MMNL absent for 1540 secs; Foregrounds taking over
Wed Feb 29 20:33:31 2012
MMNL absent for 1540 secs; Foregrounds taking over
MMNL absent for 1540 secs; Foregrounds taking over
Wed Feb 29 20:33:32 2012
MMNL absent for 1540 secs; Foregrounds taking over
Wed Feb 29 20:33:33 2012
MMNL absent for 1540 secs; Foregrounds taking over
Wed Feb 29 21:45:24 2012
MMNL absent for 4318 secs; Foregrounds taking over
MMNL absent for 4318 secs; Foregrounds taking over
MMNL absent for 4322 secs; Foregrounds taking over
Dump file c:\oraclexe\app\oracle\admin\xe\bdump\alert_xe.log
Wed Feb 29 22:30:01 2012
ORACLE V10.2.0.1.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Windows XP Version V5.1 Service Pack 3, v.3244
CPU : 2 - type 586, 2 Physical Cores
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:3097M/3546M, Ph+PgF:5143M/5429M, VA:1943M/2047M
Wed Feb 29 22:30:01 2012
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =10
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.1.0.
System parameters with non-default values:
sessions = 49
__shared_pool_size = 201326592
__large_pool_size = 8388608
__java_pool_size = 4194304
__streams_pool_size = 0
spfile = C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SERVER\DBS\SPFILEXE.ORA
sga_target = 805306368
control_files = C:\ORACLEXE\ORADATA\XE\CONTROL.DBF
__db_cache_size = 587202560
compatible = 10.2.0.1.0
db_recovery_file_dest = C:\oraclexe\app\oracle\flash_recovery_area
db_recovery_file_dest_size= 10737418240
undo_management = AUTO
undo_tablespace = UNDO
remote_login_passwordfile= EXCLUSIVE
dispatchers = (PROTOCOL=TCP) (SERVICE=XEXDB)
shared_servers = 4
local_listener = (ADDRESS=(PROTOCOL=TCP)(HOST=winsp3ue)(PORT=1522))
job_queue_processes = 4
audit_file_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\ADUMP
background_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\BDUMP
user_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\UDUMP
core_dump_dest = C:\ORACLEXE\APP\ORACLE\ADMIN\XE\CDUMP
db_name = XE
open_cursors = 300
os_authent_prefix =
pga_aggregate_target = 268435456
PMON started with pid=2, OS id=2176
PSP0 started with pid=3, OS id=2204
MMAN started with pid=4, OS id=2208
DBW0 started with pid=5, OS id=2212
LGWR started with pid=6, OS id=2220
CKPT started with pid=7, OS id=2240
SMON started with pid=8, OS id=2460
RECO started with pid=9, OS id=2464
CJQ0 started with pid=10, OS id=2480
MMON started with pid=11, OS id=2484
Wed Feb 29 22:30:02 2012
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
MMNL started with pid=12, OS id=2492
Wed Feb 29 22:30:02 2012
starting up 4 shared server(s) ...
Oracle Data Guard is not available in this edition of Oracle.
Wed Feb 29 22:30:02 2012
alter database mount exclusive
Wed Feb 29 22:30:06 2012
Setting recovery target incarnation to 2
Wed Feb 29 22:30:06 2012
Successful mount of redo thread 1, with mount id 2657657770
Wed Feb 29 22:30:06 2012
Database mounted in Exclusive Mode
Completed: alter database mount exclusive
Wed Feb 29 22:30:07 2012
alter database open
Wed Feb 29 22:30:07 2012
Beginning crash recovery of 1 threads
Wed Feb 29 22:30:07 2012
Started redo scan
Wed Feb 29 22:30:15 2012
Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_2544.trc:
ORA-00333: redo log read error block 10347 count 6144
ORA-00312: online log 2 thread 1: 'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_7LZYZL5V_.LOG'
ORA-27070: async read/write failed
OSD-04016: Error queuing an asynchronous I/O request.
O/S-Error: (OS 23) Data error (cyclic redundancy check).
Waiting for Help
Regards

Errors in file c:\oraclexe\app\oracle\admin\xe\udump\xe_ora_2544.trc:
ORA-00333: redo log read error block 10347 count 6144
ORA-00312: online log 2 thread 1: 'C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_7LZYZL5V_.LOG'
ORA-27070: async read/write failed
OSD-04016: Error queuing an asynchronous I/O request.
O/S-Error: (OS 23) Data error (cyclic redundancy check).

Your redo log file might be corrupted or missing; check it physically: C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_7LZYZL5V_.LOG
Is it in archivelog mode?
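If the online log is unreadable, one option is an incomplete ("fake") recovery followed by opening with RESETLOGS; a hypothetical sketch (note that any redo in the damaged log is lost, so take a full backup immediately afterwards):

```sql
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> RECOVER DATABASE UNTIL CANCEL;
-- apply logs up to (but not including) the corrupt one, then type CANCEL
SQL> ALTER DATABASE OPEN RESETLOGS;
```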
Perform an incomplete ("fake") recovery and open the database with RESETLOGS. -
Standby Redo Log Files ?
Hi Everyone,
Today, after reading two different sources on standby protection modes, I found myself puzzled and stuck. An article from Burleson.com says 'Oracle supports the standby redo logs on a logical standby database and can now be configured in maximum data protection modes such as MAXIMUM PROTECTION ...'
On the other hand, in some blogs and other resources I found the opposite of what Burleson Consulting posted on their website.
[http://4.bp.blogspot.com/-t0G_-xc8EAs/Tpvx9w2t8oI/AAAAAAAAAN4/Jw3U9s89Wtk/s1600/final.JPG|http://4.bp.blogspot.com/-t0G_-xc8EAs/Tpvx9w2t8oI/AAAAAAAAAN4/Jw3U9s89Wtk/s1600/final.JPG]
or
Blog from Jeff Hunter
[http://www.idevelopment.info/data/Oracle/DBA_tips/Data_Guard/DG_3.shtml|http://www.idevelopment.info/data/Oracle/DBA_tips/Data_Guard/DG_3.shtml]
Minimum Requirements for Data Protection Modes

                            Maximum Protection  Maximum Availability                 Maximum Performance
Redo Archival Process       LGWR                LGWR                                 LGWR or ARCH
Network Transmission Mode   SYNC                SYNC                                 ASYNC when using LGWR; not applicable with ARCH
Disk Write Option           AFFIRM              AFFIRM                               NOAFFIRM
Standby Redo Logs Required? Yes                 Required for physical standby        Required for physical standby
                                                databases only (standby redo logs    databases using the LGWR process
                                                are not supported for logical
                                                standby databases)
Database Type               Physical only       Physical and Logical                 Physical and Logical
Please help me to find true between the two.
Or please provide any doc to read.
Thanks
Prashant Dixit

Maximum Protection / Maximum Availability / Maximum Performance? Depends on the business requirement; by default Maximum Performance (for most clients).
Redo Archival Process? LGWR is recommended in Maximum Performance.
Network Transmission Mode? Depends; if Maximum Performance, asynchronous.
Disk Write Option? Not clear.
Standby Redo Logs Required? If real-time apply, yes.
Database Type? Not clear; assuming physical or logical? Depends on the requirement, preferably physical. -
DB Cache Full or Redo Log Full?
Is there any way that Oracle can write to datafiles in the middle of a transaction?
I am reading, processing and writing very large LOBs, which gives the error "no free buffers available in buffer pool".
With LOBs, a LOB is not written until the whole transaction finishes - but in my case the LOB size is larger than the size of the data buffer cache.
The error is "ORA-00379: no free buffers available in buffer pool DEFAULT for block size 8K"
The exact question I would like answered is: which buffer is full, the data buffer cache or the redo log buffer?
If it is the data buffer cache, is there a mechanism that allows writing data to the datafiles in the middle of a transaction? I have to process LOBs which are 3 to 4 times the size of the db cache.
I am referring to the same problem outlined in an earlier thread.
Thanks

Is there any way that Oracle can write to datafiles in the middle of a transaction?
r.- Oracle writes only committed transactions to the datafiles, subject to several factors.
I am reading, processing and writing very large LOBs, which gives the error "no free buffers available in buffer pool".
r.- You have to increase the size of the buffer pool.
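Increasing the default buffer pool can be done online when memory allows (the value is illustrative; when sga_target is set, db_cache_size acts as a minimum):

```sql
ALTER SYSTEM SET db_cache_size = 512M SCOPE=BOTH;
```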
With LOBs, a LOB is not written until the whole transaction finishes - but in my case the LOB size is larger than the size of the data buffer cache. The error is "ORA-00379: no free buffers available in buffer pool DEFAULT for block size 8K". The exact question: which buffer is full, the data buffer cache or the redo log buffer?
r.- The data buffer cache. What version are you on?
If it is the data buffer cache, is there a mechanism which allows writing data to the datafiles in the middle of a transaction, as I have to process LOBs which are 3 to 4 times the size of the db cache?
r.- Oracle does not write to the datafiles in that way.

Joel Pérez
http://www.oracle.com/technology/experts -
[DG Physical] ORA-00368: checksum error in redo log block
Hi all,
I'm building a DR solution with 1 primary & 2 DR site (Physical).
All DBs use Oracle 10.2.0.3.0 on Solaris 64bit.
The first one ran fine for some days (6), then I installed the 2nd. After restoring the DB (DUPLICATE TARGET DATABASE FOR STANDBY) it was ready to apply redo. The DB fetched the missing archived log gaps, and I got the following error:
==================
Media Recovery Log /global/u04/recovery/billhcm/archive/2_32544_653998293.dbf
Errors with log /global/u04/recovery/billhcm/archive/2_32544_653998293.dbf
MRP0: Detected read corruption! Retry recovery once log is re-fetched...
Wed Jan 27 21:46:25 2010
Errors in file /u01/oracle/admin/billhcm/bdump/billhcm1_mrp0_12606.trc:
ORA-00368: checksum error in redo log block
ORA-00353: log corruption near block 1175553 change 8236247256146 time 01/27/2010 18:33:51
ORA-00334: archived log: '/global/u04/recovery/billhcm/archive/1_47258_653998293.dbf'
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 8236247255373
===================
I suspected that RFS fetched the file incorrectly, so I used FTP to get this file and continued the apply, and it passed. Comparing the RFS-fetched file with the FTP copy showed they were different. At that point I thought something was wrong with RFS, because the content of the archived log was not right. (I used BACKUP VALIDATE ARCHIVELOG SEQUENCE BETWEEN N1 AND N2 THREAD X to check all the archived logs RFS fetched; there was corruption in every file.)
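The validation check mentioned can be run from the RMAN prompt like this (the sequence range is illustrative; corrupt blocks are reported in the RMAN output):

```sql
RMAN> BACKUP VALIDATE ARCHIVELOG SEQUENCE BETWEEN 32500 AND 32544 THREAD 2;
```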
I restored the DR DB again and applied an incremental backup from the primary; now it runs well. I don't know what happened, as I followed the same procedure for all the DR DBs.
Yesterday night I had to stop and restart DR site 1. Today I checked and it got the same error as the 2nd site, with corrupted redo. I tried deleting the archived logs and letting RFS re-fetch them, but the files were corrupt too.
If this happens with the 2nd site again, that will be a big problem.
DR site 1 and the primary are linked by a GB switch, site 2 by a 155Mbps connection (fast enough for my DB load at about 1.5MB/s average apply rate).
I searched Oracle Support (Metalink) but no luck; there is a case, but it mentions max_connections>1 (mine is the default of 1).
Can someone show me how to troubleshoot/debug/trace this problem?
That would be a great help!
Thank you very much.

This (Replication) is the wrong forum for your posting.
Please post to the "Database - General" forum at
General Database Discussions
But, first, log an SR with Oracle Support.
Hemant K Chitale -
Sizing the redo log files using the optimal_logfile_size column of v$instance_recovery.
Regards
I have a specific question regarding logfile size. I have deployed a test database, and I was exploring how to select an optimal redo log size for performance tuning using the OPTIMAL_LOGFILE_SIZE column of v$instance_recovery. My main goal is to reduce the redo bytes required for instance recovery. So far I have not been able to optimize the redo log file size. Here are the steps I followed:
In order to use the advisory from v$instance_recovery i had to set fast_start_mttr_target parameter which is by default not set so i did these steps:-
1)SQL> sho parameter fast_start_mttr_target;
NAME TYPE VALUE
fast_start_mttr_target integer 0
2) Setting the fast_start_mttr_target requires nullifying following deferred parameters :-
SQL> show parameter log_checkpoint;
NAME TYPE VALUE
log_checkpoint_interval integer 0
log_checkpoint_timeout integer 1800
log_checkpoints_to_alert boolean FALSE
SQL> select ISSES_MODIFIABLE,ISSYS_MODIFIABLE,ISINSTANCE_MODIFIABLE,ISMODIFIED from v$parameter where name like'log_checkpoint_timeout';
ISSES_MODIFIABL ISSYS_MODIFIABLE ISINSTANCE_MODI ISMODIFIED
FALSE IMMEDIATE TRUE FALSE
SQL> alter system set log_checkpoint_timeout=0 scope=both;
System altered.
SQL> show parameter log_checkpoint_timeout;
NAME TYPE VALUE
log_checkpoint_timeout integer 0
3) Now setting fast_start_mttr_target
SQL> select ISSES_MODIFIABLE,ISSYS_MODIFIABLE,ISINSTANCE_MODIFIABLE,ISMODIFIED from v$parameter where name like'fast_start_mttr_target';
ISSES_MODIFIABL ISSYS_MODIFIABLE ISINSTANCE_MODI ISMODIFIED
FALSE IMMEDIATE TRUE FALSE
Setting fast_start_mttr_target to 1200 = 20 minutes of checkpoint switching, according to the Oracle recommendation.
Querying the v$instance_recovery view
4) SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
276 165888 93 59 361 16040
Here TARGET_MTTR was 93, so I set fast_start_mttr_target to 120.
SQL> alter system set fast_start_mttr_target=120 scope=both;
System altered.
Now the logfile size suggested by v$instance_recovery is 290 Mb
SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
59 165888 93 59 290 16080
After altering the logfile size to 290 as show below by v$log view :-
SQL> select GROUP#,THREAD#,SEQUENCE#,BYTES from v$log;
GROUP# THREAD# SEQUENCE# BYTES
1 1 24 304087040
2 1 0 304087040
3 1 0 304087040
4 1 0 304087040
5) After altering the size, I observed an anomaly: the redo log blocks to be applied for recovery increased from 59 to 696, and the v$instance_recovery view is now suggesting a logfile size of 276 MB. Have I misunderstood something?
SQL> select ACTUAL_REDO_BLKS,TARGET_REDO_BLKS,TARGET_MTTR,ESTIMATED_MTTR, OPTIMAL_LOGFILE_SIZE,CKPT_BLOCK_WRITES from v$instance_recovery;
ACTUAL_REDO_BLKS TARGET_REDO_BLKS TARGET_MTTR ESTIMATED_MTTR OPTIMAL_LOGFILE_SIZE CKPT_BLOCK_WRITES
696 646947 120 59 276 18474
Please clarify the above output: I am unable to optimize the logfile size and have not been able to achieve the goal of reducing the redo log blocks to be applied for recovery. Any help is appreciated in this regard.

sunny_123 wrote:
Sir, Oracle says that fast_start_mttr_target can be set up to 3600 = 1 hour, as suggested by the following Oracle document:
http://docs.oracle.com/cd/B10500_01/server.920/a96533/instreco.htm
I set mine to 1200 = 20 minutes. Later I adjusted it to 120 = 2 minutes, as TARGET_MTTR suggested it should be around 100 (if the fast_start_mttr_target value is too high or too low, the effective value is shown in TARGET_MTTR of v$instance_recovery).

Just to add: you are reading the 9.2 documentation, and a lot has changed since then. For example, in 9.2 the FSMTTR parameter was introduced and explicitly required to be set and monitored by the DBA, because of the additional checkpoint writes it might cause. From 10g onwards this parameter is automatically maintained by Oracle. Also, 9i has long been desupported, followed by 10g, so it's better to start reading the latest 11g documentation, or at least that of 10.2.
Aman.... -
Where are BLOB Files stored when using redo log files.
I am using Archive Log Mode for my backups. I was wondering if Blob files get stored in the redo log files or if they are archived somewhere else?
Rob.

BLOBs are just columns of some tables; by default they are stored in the table's tablespace in their own segments. Changes to BLOB columns are also recorded in the redo log, more or less like any other column.
See a short example of default LOB storage in Re: CLOB Datatype [About Space allocation]
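Whether LOB changes generate full redo also depends on the LOB storage options; a hypothetical example (table and column names invented):

```sql
-- NOLOGGING LOB segments generate minimal redo for the LOB data;
-- such changes are then NOT recoverable from the archived logs:
CREATE TABLE docs (
  id   NUMBER PRIMARY KEY,
  body BLOB
) LOB (body) STORE AS (NOCACHE NOLOGGING);
```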
Edited by: P. Forstmann on 27 avr. 2011 07:20 -
Hi -
I have a few questions regarding redo log groups and naming conventions I was hoping someone could address or point me to some docs.
I am multiplexing my control file and redo logs across HDDs for an XE installation.
The original logs created at install have the naming form of:
O1_MF_1_462H1GK7_.LOG.
1. What is behind the naming scheme (specifically the _462H1GK7_ section)?
2. Is there a generally recognized naming scheme for adding new group members in XE?
3. I noticed that with any XE install I have done, the redo log groups default to Group 1 and Group 3, with no Group 2 to be found. Is this normal/required? If not, is it best to add group 2 and then remove group 3? I'm not sure if it has much bearing here, but the 10gR2 docs state that skipping group numbers will consume space in the control files.
Thanks in advance for any assistance,
Scott

The odd-looking filename comes from using Oracle Managed Files (OMF). You can override the naming scheme or create your own groups and members. It is very common to include "redo" in the file name along with group and member identifiers. An example would be:
<path>/redo01a.log
<path>/redo01b.log
<path>/redo02a.log
etc.
You can see group 01 has two members, a and b. You can also include the SID in the file name, but that can be identified via the path. 462H1GK7 is a unique identifier generated by Oracle; it has no meaning.
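To use explicit names like those instead of OMF, one might add a group like this (paths and the group number are illustrative):

```sql
ALTER DATABASE ADD LOGFILE GROUP 2
  ('/u01/oradata/XE/redo02a.log',
   '/u02/oradata/XE/redo02b.log') SIZE 100M;
```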
I don't know about XE not creating a group 2. Were there group 2 file(s) left over from a previous install (although the OMF probably would have ignored the existing files)? If creating the files manually, you can use "reuse" to use existing files. -
Cold backup with online redo logs
I am working on 10G in AIX for a single instance
It is just a general DB backup & restore question, but I am a bit confused.
I am going to perform a cold backup of my ARCHIVELOG database.
I perform a cold backup because it is a testing database which can tolerate data loss and downtime during the backup.
I read some guides. They all mentioned to backup all the datafiles and control files.
During the restoration, I have to copy all the backed up datafiles and control files to the default location.
Then Startup mount;
The last step before opening the database is RECOVER DATABASE UNTIL CANCEL.
As I understand it, I have to run RECOVER DATABASE because the online redo logs were not backed up, so recovery is needed in order to reset the redo logs.
My question: would I be able to skip the RECOVER DATABASE command and directly start up the database, if I had backed up the online redo logs and copied them to the default location during restoration?
However, I have read many documents saying it is not recommended to back up the online redo logs. Is that only the case for hot backups? For my case, is a cold backup of the online redo logs recommended?
Thanks all

jgarry wrote:
Edit: And never forget, those test databases are some developer's production.

Absolutely true, in my experience. Losing the work of a paid developer is just as bad as losing the work of a production system, and may even be worse, because it may not be possible to re-enter the missing data into the system.
I think a cold backup is only suitable on special occasions: for instance, to relocate or copy the database to different storage media, or if the database doesn't change, or if losing changes is absolutely irrelevant. Otherwise, put the database into archivelog mode and do a hot backup. After that you will also have alternative options which can make the restore and recovery of the database very easy and efficient, like flashback database, etc., but they take substantial additional disk space.
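For reference, the restore sequence described in the question, sketched in SQL*Plus (the backed-up datafiles and control files are assumed to have been copied back into place first):

```sql
SQL> STARTUP MOUNT
SQL> RECOVER DATABASE UNTIL CANCEL;
-- cancel immediately if no redo needs to be applied
SQL> ALTER DATABASE OPEN RESETLOGS;
```

If consistent copies of the online redo logs were also restored along with everything else, the database could instead be opened with a plain ALTER DATABASE OPEN; most guides skip backing them up to avoid accidentally overwriting current online logs during a restore.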
DB version:11.2
Platform : Solaris 10
We create RAC DBs manually. Below is a log of the DB creation from Node1. The instance on Node2 is not yet created (only the binary is installed on Node2).
SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup nomount pfile=/u03/oracle/11.2/db_1/dbs/initnehprd1.ora
ORACLE instance started.
Total System Global Area 1252643278 bytes
Fixed Size 2219208 bytes
Variable Size 771752760 bytes
Database Buffers 469762048 bytes
Redo Buffers 8929280 bytes
SQL> CREATE DATABASE nehprd MAXINSTANCES 8 MAXLOGFILES 16 MAXLOGMEMBERS 4 MAXDATAFILES 1024
2 CHARACTER SET AL32UTF8 NATIONAL CHARACTER SET AL16UTF16
3 DATAFILE '+DG_DATA01/nehprd/nehprd_system01.dbf' SIZE 1000m EXTENT MANAGEMENT LOCAL
4 SYSAUX DATAFILE '+DG_DATA01/nehprd/nehprd_sysaux01.dbf' SIZE 600m
5 DEFAULT TEMPORARY TABLESPACE temp
6 TEMPFILE '+DG_DATA01/nehprd/nehprd_temp01.dbf' SIZE 2000m EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5m
7 UNDO TABLESPACE undotbs11 DATAFILE '+DG_DATA01/nehprd/nehprd_undotbs1101.dbf' SIZE 700m
8 LOGFILE
9 GROUP 1 ('+DG_DATA01/nehprd/nehprd_log01.dbf') SIZE 150m,
10 GROUP 2 ('+DG_DATA01/nehprd/nehprd_log02.dbf') SIZE 150m,
11 GROUP 3 ('+DG_DATA01/nehprd/nehprd_log03.dbf') SIZE 150m
12 /
Database created.
Elapsed: 00:00:18.95
SQL> CREATE UNDO TABLESPACE undotbs12 DATAFILE '+DG_DATA01/nehprd/nehprd_undotbs1201.dbf' SIZE 700m;
Tablespace created.
Elapsed: 00:00:01.30
SQL> ALTER DATABASE ADD LOGFILE thread 2 GROUP 4 '+DG_DATA01/nehprd/nehprd_log04.dbf' SIZE 150m;
Database altered.
Elapsed: 00:00:00.25
SQL> ALTER DATABASE ADD LOGFILE thread 2 GROUP 5 '+DG_DATA01/nehprd/nehprd_log05.dbf' SIZE 150m;
Database altered.
Elapsed: 00:00:00.43
SQL> ALTER DATABASE ADD LOGFILE thread 2 GROUP 6 '+DG_DATA01/nehprd/nehprd_log06.dbf' SIZE 150m;
Database altered.
But after the above activity, the following log files were created in the DB:
6 log groups for each instance, and they are all in the same location, +DG_DATA01/nehprd !
INST_ID GROUP# STATUS TYPE MEMBER IS_
1 1 ONLINE +DG_DATA01/nehprd/nehprd_log01.dbf NO
1 2 ONLINE +DG_DATA01/nehprd/nehprd_log02.dbf NO
1 3 ONLINE +DG_DATA01/nehprd/nehprd_log03.dbf NO
1 4 ONLINE +DG_DATA01/nehprd/nehprd_log04.dbf NO
1 5 ONLINE +DG_DATA01/nehprd/nehprd_log05.dbf NO
1 6 ONLINE +DG_DATA01/nehprd/nehprd_log06.dbf NO
2 1 ONLINE +DG_DATA01/nehprd/nehprd_log01.dbf NO
2 2 ONLINE +DG_DATA01/nehprd/nehprd_log02.dbf NO
2 3 ONLINE +DG_DATA01/nehprd/nehprd_log03.dbf NO
2 4 ONLINE +DG_DATA01/nehprd/nehprd_log04.dbf NO
2 5 ONLINE +DG_DATA01/nehprd/nehprd_log05.dbf NO
2 6 ONLINE +DG_DATA01/nehprd/nehprd_log06.dbf NO
How were redo log groups 4, 5, 6 created for thread 1, and how were redo log groups 1, 2, 3 created for thread 2?
Hi,
To make things worse, when you query v$logfile, it shows 6 redo logfiles belonging to 6 redo groups for each instance.
The fact that it shows all the redo groups does not mean they all belong to that instance. Try querying v$database or v$datafile: does that mean the database/datafiles belong to only one instance? Of course not.
Isn't this a bit of a bug?
Of course not. It's the concept.
To understand it you need to understand the difference between an instance and a database. A database (i.e. a set of files) can be opened by many instances.
An Oracle database server consists of a database and at least one database instance (commonly referred to as simply an instance). Because an instance and a database are so closely connected, the term Oracle database is sometimes used to refer to both instance and database. In the strictest sense the terms have the following meanings:
Database
A database is a set of files, located on disk, that store data. These files can exist independently of a database instance.
Database instance
An instance is a set of memory structures that manage database files. The instance consists of a shared memory area, called the system global area (SGA), and a set of background processes. An instance can exist independently of database files.
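As a small illustration of this distinction (a sketch only; the instance names are assumed from the creation log above):

```sql
-- Run from either RAC node: the database identity is shared,
-- the instance identity is local to the node.
SELECT name FROM v$database;                      -- same name on both nodes
SELECT instance_name, host_name FROM v$instance;  -- e.g. nehprd1 vs nehprd2
```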
Database: (v$database)
CONTROLFILE (v$controlfile)
DATAFILE (v$datafile)
ONLINELOG (v$logfile,v$log)
ARCHIVELOG (v$archivelog)
SPFILE
The views above show the same values in every instance, because when a file (the database) is changed, the change is seen by all instances. That means you do not need to use gv$ for them, since the information is the same in all instances; nor do you need to connect to each instance to query these v$ views, because the information is independent of the instance.
Instances: (v$instance)
PARAMETERS (v$parameter)
MEMORY STRUCTURE (e.g v$session)
The view v$session shows information about sessions from that instance only. In RAC, each instance has its own session information, so you need to query gv$session to get session information from the other instances.
The fact that each instance is assigned its own REDO/UNDO does not mean they are part of the instances; REDO/UNDO are part of the database. They can be written by the assigned instance and read by all instances (just that).
It's not a bug: when you query v$datafile, v$logfile, or v$controlfile in all instances, you will get the same result, because it's the DATABASE. (A database, i.e. a set of files, can be opened by many instances.)
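The v$ versus gv$ difference can be sketched like this (assuming a two-instance RAC such as the one created above):

```sql
-- Database-wide structures: the same result from any instance.
SELECT group#, thread#, bytes FROM v$log;

-- Instance-local: only sessions of the instance you are connected to.
SELECT sid, username FROM v$session WHERE username IS NOT NULL;

-- Cluster-wide: sessions of all open instances, tagged with INST_ID.
SELECT inst_id, sid, username FROM gv$session WHERE username IS NOT NULL;
```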
Levi Pereira -
Hi
I want to take a redo log backup on disk. What changes do I have to make in the init<SID>.sap file?
Regards
Vikram
Hi Vikram,
If you use brtools, then you can take the backup even without a parameter change in init<SID>.sap. You have to choose disk as the medium at backup time, and the backup will then be stored in the default ....../<SID>/sapbackup.
If you want to take the backup to some other place, then you have to change the parameter
archive_copy_dir = <directory where you want to take the backup>
But if you want to take it using DB13, then you have to change the parameters for disk as the backup medium:
backup_dev_type = disk
archive_copy_dir = /oracle/<SID>/sapbackup
Regards
Ashok Dalai
Edited by: Ashok Dalai on Aug 3, 2009 8:35 AM -
Redo log backup failing with BR253E errno 2:
Hi all,
I am able to take online as well as offline backups through sapdba, but for the last 7 days my redo log backup has been failing after the online backup completes, with the error below. I also tried to start the redo log backup separately, but it failed as shown:
BR002I BRARCHIVE 6.20 (18)
BR006I Start of offline redo log processing: adwpawsw.sve 2007-11-13 21.33.22
BR280I Time stamp 2007-11-13 21.33.25
BR008I Offline redo log processing for database instance: PRD
BR009I BRARCHIVE action ID: adwpawsw
BR010I BRARCHIVE function ID: sve
BR048I Archive function: save
BR011I 195 offline redo log files found for processing, total size 9469.831 MB
BR112I Files will not be compressed
BR130I Backup device type: disk
BR106I Files will be saved on disk in directory: F:\oracle\PRD\sapbackup
BR126I Unattended mode active - no operator confirmation required
BR202I Saving init_ora
BR203I to F:\oracle\PRD\sapbackup\PRD ...
BR202I Saving o:\orcle\prd\920\DATABASE\initPRD.sap
BR203I to F:\oracle\PRD\sapbackup\PRD ...
BR280I Time stamp 2007-11-13 21.33.26
BR198I Profiles saved successfully
BR252E Function fopen() failed for 'F:\oracle\PRD\sapbackup\.PRD/oraarch/PRDarch
ARC10479.001' at location arch_process-4
BR253E errno 2: No such file or directory
BR121E Processing log file F:\oracle\PRD\sapbackup\.PRD/oraarch/PRDarchARC10479.
001 failed
BR016I 0 offline redo log files processed, total size 0.000 MB
BR007I End of offline redo log processing: adwpawsw.sve 2007-11-13 21.33.26
BR280I Time stamp 2007-11-13 21.33.26
BR005I BRARCHIVE terminated with errors
End of output from program 'BRARCHIVE' -
SAPDBA: Execution of BRARCHIVE failed.
(2007-11-13 21.33.27)
Press <return> to continue ...
SAPDBA: Last line from BRARCHIVE summary log:
'#* PRD disk adwpawsw sve 2007-11-13 21.33.22 2007-11-13 21.33.26
5 ........... -
c 6.20 (18)'
Press <return> to continue ...
Appreciate your quick replies.
Best Regards,
AjitR
Unfortunately the backup logs point to the directory
<b>'F:\oracle\PRD\sapbackup\.PRD</b>
But my logfiles are archived at D:\oracle\PRD\oraarch
and they are supposed to be backed up at F:\oracle\PRD\sapbackup
Below is my init.sap file for your reference; I can't find where this <b>.PRD</b> folder came from:
# backup mode [all | all_data | full | incr | sap_dir | ora_dir
#              | <tablespace_name> | <file_id> | <file_id1>-<file_id2>
#              | <generic_path> | (<object_list>)]
# default: all
backup_mode = all

# restore mode [all | all_data | full | incr | incr_only | incr_full
#               | <tablespace_name> | <file_id> | <file_id1>-<file_id2>
#               | <generic_path> | (<object_list>)]
# redirection with '=' is not supported here - use option '-m' instead
# default: all
restore_mode = all

# backup type [offline | offline_force | offline_standby | offline_split
#              | offline_stop | online | online_cons | online_split]
# default: offline
backup_type = offline

# backup device type
# [tape | tape_auto | tape_box | pipe | pipe_auto | pipe_box | disk
#  | disk_copy | disk_standby | stage | stage_copy | stage_standby
#  | util_file | util_file_online | rman_util | rman_disk | rman_stage
#  | rman_prep]
# default: tape
#backup_dev_type = tape
backup_dev_type = disk

# backup root directory [<path_name> | (<path_name_list>)]
# default: %SAPDATA_HOME%\sapbackup
#backup_root_dir = D:\oracle\PRD\sapbackup
backup_root_dir = F:\oracle\PRD\sapbackup

# stage root directory [<path_name> | (<path_name_list>)]
# default: value of the backup_root_dir parameter
#stage_root_dir = D:\oracle\PRD/sapbackup
stage_root_dir = F:\oracle\PRD\sapbackup

# compression flag [yes | no | hardware | only]
# default: no
compress = no

# compress command
# first $-character is replaced by the source file name
# second $-character is replaced by the target file name
# <target_file_name> = <source_file_name>.Z
# for compress command the -c option must be set
# recommended setting for brbackup -k only run:
# "%SAPEXE%\mkszip -l 0 -c $ > $"
# no default
compress_cmd = "@SAPEXE@\mkszip -c $ > $"

# uncompress command
# first $-character is replaced by the source file name
# second $-character is replaced by the target file name
# <source_file_name> = <target_file_name>.Z
# for uncompress command the -c option must be set
# no default
uncompress_cmd = "@SAPEXE@\uncompress -c $ > $"

# directory for compression [<path_name> | (<path_name_list>)]
# default: value of the backup_root_dir parameter
#compress_dir = D:\oracle\PRD/sapreorg
compress_dir = F:\oracle\PRD\sapreorg

# brarchive function [save | second_copy | double_save | save_delete
#                     | second_copy_delete | double_save_delete | copy_save
#                     | copy_delete_save | delete_saved | delete_copied]
# default: save
archive_function = save

# directory for archive log copies to disk
# default: first value of the backup_root_dir parameter
#archive_copy_dir = D:\oracle\PRD/sapbackup
archive_copy_dir = F:\oracle\PRD\sapbackup

# directory for archive log copies to stage
# should contain <SID> subdirectory
# default: first value of the stage_root_dir parameter
#archive_stage_dir = D:\oracle\PRD/sapbackup
archive_stage_dir = f:\oracle\PRD\sapbackup

# new database home directory for disk_copy | disk_standby
# no default
new_db_home = X:\oracle\C11

# stage database home directory for stage_copy | stage_standby
# default: value of the new_db_home parameter
stage_db_home = /oracle/C11

# original database home directory for split mirror disk backup
# no default
orig_db_home = /oracle/C11