Logswitch
Log switches are happening very frequently (about one every two minutes),
and the archived logs are filling the archive destination very quickly.
The following message was found in the alert log:
ORACLE Instance prod1
Thread 1 cannot allocate new log, sequence 5012
All online logs needed archiving
Hi,
Examine the periods of high redo generation and, based on that, reschedule heavy jobs to quieter times.
You can check the log switch details with the help of the query below.
Redo log switches by date and hour:
set lines 120;
set pages 999;
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1;
Best regards,
Rafi.
http://rafioracledba.blogspot.com
Similar Messages
-
Force logswitch every 30 minutes
Hi everybody,
Is there a parameter to force a log switch, e.g. every 30 minutes? Does anybody know this parameter?
Thanks
Hello,
You may use the parameter ARCHIVE_LAG_TARGET; for instance, for 30 minutes (1800 seconds):
archive_lag_target=1800
This link will give you more details:
http://download.oracle.com/docs/cd/E11882_01/server.112/e17110/initparams009.htm#CHDHFDGI
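As a minimal sketch, the parameter can be set dynamically (assuming an spfile is in use so SCOPE=BOTH is valid):

```sql
-- Force a log switch at least every 30 minutes (value is in seconds)
ALTER SYSTEM SET archive_lag_target = 1800 SCOPE = BOTH;

-- Verify the setting
SHOW PARAMETER archive_lag_target;
```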
Hope this helps.
Best regards,
Jean-Valentin -
What would cause a lot of redo or logswitches in an instance with no users?
Hello all,
I've got a database instance that is set up as a 3-node RAC cluster. I don't think this is RAC-related, but the database issue is causing RAC issues.
The instance has been set up and will later be used to temporarily hold data while another box is being rebuilt. Nothing has been imported into it, and there are no users connecting to it.
However, it seems to be generating a LOT of redo and archive log traffic. So much so that it overran the db_recovery_file_dest_size of 8 GB.
As I said, I hadn't been monitoring this instance; nothing is in it and no one should be connecting to it. But it filled up the 8 GB, then started writing tons of logs to /u01/app/oracle/diag/rdbms/instance1/INSTANCE1_node3/trace/ saying it was full, eventually filling up the /u01 filesystem and causing things to crash.
I fixed all this by deleting the alert logs to make room, then ran an RMAN job to move the archive logs to tape, and it came back up.
However, just an hour later I see over 15% of the FRA filled up again.
Any ideas on where to start troubleshooting this? I don't know what is causing this thing to generate redo at all.
Thanks in advance,
cayenne
>> I've got a database instance that is set up as a 3 node RAC cluster
That would mean one database with three instances.
Are all 3 instances writing a high volume of redo ? Or is it only one of the three instances ?
When you check for scheduled jobs in DBA_JOBS and DBA_SCHEDULER_JOBS, check all 3 instances.
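A quick sketch of both checks using the standard views (run as a DBA):

```sql
-- Redo generated so far, per RAC instance
SELECT inst_id, value AS redo_bytes
FROM gv$sysstat
WHERE name = 'redo size';

-- Anything scheduled? Check both job frameworks
SELECT job, what FROM dba_jobs;
SELECT owner, job_name, enabled FROM dba_scheduler_jobs;
```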
Hemant K Chitale
http://hemantoracledba.blogspot.com -
SAP GoLive : File System Response Times and Online Redologs design
Hello,
An SAP GoLive verification session has just been performed on our SAP production environment.
SAP ECC6
Oracle 10.2.0.2
Solaris 10
As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them :
1/
We have been told that our file system read response times "do not meet the standard requirements"
The following datafile has been flagged as having too high an average read time per block:
File name - Blocks read - Avg. read time (ms) - Total read time per datafile (ms)
/oracle/PMA/sapdata5/sr3700_10/sr3700.data10 - 67534 - 23 - 1553282
I'm surprised that an average read time of 23ms is considered a high value. What are exactly those "standard requirements" ?
2/
We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
Actually we have BW loading that generates "Checkpoint not complete" messages every night.
I've read in SAP note 79341 that:
"The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
Frankly, I have problems understanding this sentence.
Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
But how is it that frequent checkpoints should decrease the time necessary for recovery?
Thank you.
Any useful help would be appreciated.
Hello,
>> I'm surprised that an average read time of 23ms is considered a high value. What are exactly those "standard requirements" ?
The recommended ("standard") values are published at the end of sapnote #322896.
23 ms really seems a little bit high to me; for example, we have roughly 4 to 6 ms on our productive system (with SAN storage).
>> Frequent checkpoints means more redo log file switches, means more archive redo log files generated. right?
Correct.
>> But how is it that frequent chekpoints should decrease the time necessary for recovery ?
A checkpoint occurs on every log switch of the online redo log files. On a checkpoint event, the following three things happen in an Oracle database:
Every dirty block in the buffer cache is written down to the datafiles
The latest SCN is written (updated) into the datafile header
The latest SCN is also written to the controlfiles
If your redo log files are larger, checkpoints do not happen as often, and in that case the dirty buffers are not written down to the datafiles (unless free space in the buffer cache is needed). So if your instance crashes, you need to apply more redo to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN; ergo the recovery is faster.
But this concept does not fully match reality, because Oracle implements algorithms to reduce the workload for the DBWR in the case of a checkpoint.
There are also several parameters (depending on the Oracle version) which ensure that a required recovery time is kept, for example FAST_START_MTTR_TARGET.
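As a hedged illustration of that parameter (the 60-second value is an arbitrary example, not a recommendation):

```sql
-- Bound the expected instance recovery time to roughly 60 seconds
ALTER SYSTEM SET fast_start_mttr_target = 60 SCOPE = BOTH;

-- Oracle's own estimate of current recovery time vs. the target
SELECT target_mttr, estimated_mttr FROM v$instance_recovery;
```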
Regards
Stefan -
Problem in recovering a database on another machine
Dear All,
I need your help in restoring and recovering a database on another machine. I don't have access to the old machine to get the logfiles or archivelogs. I took a full backup using RMAN, restored the database with RMAN successfully, and am now trying to recover it. RMAN hasn't restored the logfiles. The database version is 8.1.7.4, the OS is Solaris 8. During recovery, it shows a problem with the rbs file. I'm showing all commands and errors below. Please give me any idea how to recover it successfully.
SVRMGRL>recover database using backup controlfile until cancel;
ORA-0279: Change 1935345519 generated at 08/19/2005 16:45:50 needed for thread 1
ORA-0289: suggestion: /u07/oraexp/PROD/arch/arch_1_29958.arc
ORA-0280: Change 1935345519 for thread 1 is in sequence #29958
specify log : {RET}...
cancel
ORA-01547: warning: Recover succeeded but open resetlogs would get error below
ORA-01194: file 2 needs more recovery to be consistent
ORA-01110: data file 2: '/u06/oracle/oradata/PROD/rbs01-PROD.dbf'
This RBS file is 7 GB in size. Could that be the cause of the problem? I even tried UNTIL TIME, but it gives the same error.
SVRMGRL>recover database using backup controlfile until time '2005-08-16:20:10:00';
ORA-0279: Change 1935345519 generated at 08/19/2005 16:45:50 needed for thread 1
ORA-0289: suggestion: /u07/oraexp/PROD/arch/arch_1_29958.arc
ORA-0280: Change 1935345519 for thread 1 is in sequence #29958
specify log : {RET}...
cancel
ORA-01547: warning: Recover succeeded but open resetlogs would get error below
ORA-01194: file 2 needs more recovery to be consistent
ORA-01110: data file 2: '/u06/oracle/oradata/PROD/rbs01-PROD.dbf'
Why is it asking for an archive file generated on the 19th, when I'm trying to recover only until the 16th?
Regards
Rakesh
Hi.
What is the size of the redo logs in the database? With a small number of transactions, there might be redo in this particular archived file from two days back. The timestamp of an archived log file does not mean that all redo in the file is from that day; it may contain changes from several days back if no log switch occurred in between.
So, if I were you, I would provide all archivelogs required to get the datafiles consistent.
As a matter of fact, if archived redo is in the following folder :
/u07/oraexp/PROD/arch/
I would use recover database until time and press return for all archived logs svrmgrl comes up with.
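A sketch of that session (the UNTIL TIME value here is hypothetical; it only needs to be later than the change shown in the ORA-0279 message):

```sql
-- Apply every archived log the recovery session suggests
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL TIME '2005-08-19:17:00:00';
-- At the "Specify log" prompt, enter AUTO so each suggested file in
-- /u07/oraexp/PROD/arch/ is applied without further prompting.
```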
I assume that the database was closed normally prior to taking the offline backup.
Good luck.
Rgds
Kjell Ove -
Alert log is not updated in Oracle 8i
Database is running
The alert log is not updated, not even after a manual log switch.
I have created a job for this, but it does not run by itself, while the same job runs fine on another DB.
DBA_JOBS shows a NULL value for this job.
I can run the job manually; then DBA_JOBS shows a value.
After creating the job, it shows up in the invalid objects list.
Thanks in advance
Oracle_2410 wrote:
>> Please somebody help me!!!
It's a 3-month-old problem!!!
For first problem "Alert log not getting updated"
Please post the os version and name. And output of below query
show parameter background
ls -lart (or dir) <path shown above>
Regards
Anurag -
Hi
Database version: 11.0.1.7
Applications 12.1.3
Can anyone tell us what is happening and what can be done, as this is occurring in the PROD instance.
Applications slow down when log switch occurs daily exactly at particular time.
Today it took around 1 hour.
OS and alert log details from when the issue occurred:
[oraprod@prod trace]$ top
top - 11:26:54 up 2 days, 19:56, 3 users, load average: 9.74, 9.44, 7.34
Tasks: 530 total, 1 running, 529 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3%us, 0.2%sy, 0.0%ni, 98.8%id, 0.7%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 15972352k total, 15948424k used, 23928k free, 308584k buffers
Swap: 8193108k total, 119924k used, 8073184k free, 12644764k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3209 oraprod 15 0 13000 1336 724 R 1.9 0.0 0:00.02 top
9945 oraprod 16 0 1235m 50m 47m D 1.9 0.3 0:04.07 oracle
11372 oraprod 15 0 1242m 343m 327m S 1.9 2.2 0:04.99 oracle
1 root 15 0 10348 684 576 S 0.0 0.0 0:01.28 init
[oraprod@prod trace]$ ps -ef |grep 9945
oraprod 3220 2023 0 11:27 pts/2 00:00:00 grep 9945
oraprod 9945 1 0 Sep09 ? 00:00:04 ora_lgwr_OBAPROD
[oraprod@prod trace]$ ps -ef |grep 11372
oraprod 3223 2023 0 11:27 pts/2 00:00:00 grep 11372
oraprod 11372 1 0 Sep09 ? 00:00:05 oracleOBAPROD (LOCAL=NO)
Alert log details:
Mon Sep 10 11:04:35 2012
Beginning log switch checkpoint up to RBA [0x1ac.2.10], SCN: 5965243578741
Thread 1 advanced to log sequence 428 (LGWR switch)
Current log# 5 seq# 428 mem# 0: /u02/OBAPROD/db/apps_st/data/log05a.dbf
Current log# 5 seq# 428 mem# 1: /u02/OBAPROD/db/apps_st/data/log05b.dbf
Mon Sep 10 11:05:21 2012
Completed checkpoint up to RBA [0x1ac.2.10], SCN: 5965243578741
Mon Sep 10 11:08:17 2012
Beginning log switch checkpoint up to RBA [0x1ad.2.10], SCN: 5965243623271
Thread 1 advanced to log sequence 429 (LGWR switch)
Current log# 2 seq# 429 mem# 0: /u02/OBAPROD/db/apps_st/data/log02a.dbf
Current log# 2 seq# 429 mem# 1: /u02/OBAPROD/db/apps_st/data/log02b.dbf
Mon Sep 10 11:10:51 2012
Completed checkpoint up to RBA [0x1ad.2.10], SCN: 5965243623271
Mon Sep 10 11:18:25 2012
Thread 1 cannot allocate new log, sequence 430
Private strand flush not complete
Current log# 2 seq# 429 mem# 0: /u02/OBAPROD/db/apps_st/data/log02a.dbf
Current log# 2 seq# 429 mem# 1: /u02/OBAPROD/db/apps_st/data/log02b.dbf
Mon Sep 10 11:21:16 2012
Beginning log switch checkpoint up to RBA [0x1ae.2.10], SCN: 5965243668907
Thread 1 advanced to log sequence 430 (LGWR switch)
Current log# 1 seq# 430 mem# 0: /u02/OBAPROD/db/apps_st/data/log01a.dbf
Current log# 1 seq# 430 mem# 1: /u02/OBAPROD/db/apps_st/data/log01b.dbf
Mon Sep 10 11:25:39 2012
Completed checkpoint up to RBA [0x1ae.2.10], SCN: 5965243668907
Mon Sep 10 11:32:15 2012
Thread 1 cannot allocate new log, sequence 431
Private strand flush not complete
Current log# 1 seq# 430 mem# 0: /u02/OBAPROD/db/apps_st/data/log01a.dbf
Current log# 1 seq# 430 mem# 1: /u02/OBAPROD/db/apps_st/data/log01b.dbf
Mon Sep 10 11:33:46 2012
Beginning log switch checkpoint up to RBA [0x1af.2.10], SCN: 5965245396278
Thread 1 advanced to log sequence 431 (LGWR switch)
Current log# 6 seq# 431 mem# 0: /u02/OBAPROD/db/apps_st/data/log06a.dbf
Current log# 6 seq# 431 mem# 1: /u02/OBAPROD/db/apps_st/data/log06b.dbf
Mon Sep 10 11:38:10 2012
Completed checkpoint up to RBA [0x1af.2.10], SCN: 5965245396278
Mon Sep 10 11:43:16 2012
Beginning log switch checkpoint up to RBA [0x1b0.2.10], SCN: 5965245420329
Thread 1 advanced to log sequence 432 (LGWR switch)
Current log# 7 seq# 432 mem# 0: /u02/OBAPROD/db/apps_st/data/log07a.dbf
Current log# 7 seq# 432 mem# 1: /u02/OBAPROD/db/apps_st/data/log07b.dbf
Mon Sep 10 11:48:34 2012
Completed checkpoint up to RBA [0x1b0.2.10], SCN: 5965245420329
We have a logfile size of 500MB,
Fast_start_mttr_target is 30 sec, archive log mode enabled and also db_writer_processes=10.
Please let us know what is causing the problem.
Thanks....
Edited by: 955685 on Sep 9, 2012 11:16 PM
What's the size of the redo logs? Log switches are happening very frequently in the above log: 4 log switches in 6 minutes. Make a plan to increase the redo log size.
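Before resizing, confirming the current log configuration helps; a sketch using the standard view:

```sql
-- Current online redo log groups, sizes and states
SELECT group#, thread#, bytes/1024/1024 AS size_mb, members, status
FROM v$log
ORDER BY group#;
```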
>> Cpu(s): 0.3%us, 0.2%sy, 0.0%ni, 98.8%id, 0.7%wa, 0.0%hi, 0.0%si, 0.0%st
Mostly idle CPU.
>> can any one tel us what is happening and what can be done as it is happening in PROD instance
You have to generate an AWR/Statspack report to get more detail on this. -
Tuning of Redo logs in data warehouses (dwh)
Hi everybody,
I'm looking for some guidance to configure redo logs in data warehouse environments.
Of course we are running in noarchivelog mode and use direct path inserts (nologging) wherever possible.
Nevertheless, every ETL process (one per day) produces 150 GB of redo. That seems quite a lot compared to the overall data volume (1 TB of tables + indexes).
Actually I'm not sure whether there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
Here are the facts:
- Oracle 10g, 32 GB RAM
- 6 GB SGA, 20 GB PGA
- 5 log groups each with 1 Gb log file
- 4 MB Log buffer
- every day ca 150 logswitches (with peaks: some logswitches after 10 seconds)
- some sysstat metrics after one etl load:
Select name, to_char(value, '9G999G999G999G999G999G999') from v$sysstat Where name like 'redo %';
"NAME" "TO_CHAR(VALUE,'9G999G999G999G999G999G999')"
"redo synch writes" " 300.636"
"redo synch time" " 61.421"
"redo blocks read for recovery"" 0"
"redo entries" " 327.090.445"
"redo size" " 159.588.263.420"
"redo buffer allocation retries"" 95.901"
"redo wastage" " 212.996.316"
"redo writer latching time" " 1.101"
"redo writes" " 807.594"
"redo blocks written" " 321.102.116"
"redo write time" " 183.010"
"redo log space requests" " 10.903"
"redo log space wait time" " 28.501"
"redo log switch interrupts" " 0"
"redo ordering marks" " 2.253.328"
"redo subscn max counts" " 4.685.754"
So the questions:
Does anybody see any tuning needs? Should the redo logs be increased in size or in number? What about placing the redo logs on solid state disks?
kind regards,
Mirko
user5341252 wrote:
>> I'm looking for some guidance to configure redo logs in data warehouse environments.
>> Of course we are running in noarchive log mode and use direct path inserts (nologging) wherever possible.
Why "of course"? What's your recovery strategy if you wreck the database?
>> Nevertheless every etl process (one process per day) produces 150 GB of redo logs. That seems quite a lot compared to the overall data volume (1 TB tables + indexes).
This may be an indication that you need to do something to reduce index maintenance during data loading.
>> Actually I'm not sure if there is a tuning problem, but because of the large amount of redo I'm interested in examining it.
For a quick check you might be better off running statspack (or AWR) snapshots across the start and end of batch to get an idea of what work goes on and where the most time goes. A better strategy would be to examine specific jobs in detail, though).
>> "redo synch time" " 61.421"
>> "redo log space wait time" " 28.501"
Rough guideline: if the redo is slowing you down, then you've lost less than 15 minutes across the board to the log writer. Given the number of processes loading and the elapsed time to load, is this significant?
>> "redo buffer allocation retries" " 95.901"
This figure tells us how OFTEN we couldn't get space in the log buffer, but not how much time we lost as a result. We also need to see your 'log buffer space' wait time.
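That figure can be pulled with a sketch like this (TIME_WAITED is reported in centiseconds):

```sql
-- How much time was actually lost waiting for log buffer space
SELECT event, total_waits, time_waited
FROM v$system_event
WHERE event = 'log buffer space';
```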
>> Does anybody can see tuning needs? Should the Redo logs be increased or incremented? What about placing redo logs on Solid state disks?
Based on the information you've given so far, I don't think anyone should be giving you concrete recommendations on what to do; only suggestions on where to look or what to tell us.
Regards
Jonathan Lewis -
Hi,
Oracle version is 10.2.0.3. I am into the process of creating a physical standby. Both primary as well as standby are kept in same IDC connected by 1Gbps link.
The client doesn't want the primary DB to be affected if, for any reason, the standby or the link between them goes down. In this situation the only choice I have is to configure the primary DB in maximum performance mode.
In this scenario, what is advisable for redo transport: ARCn or LGWR (with SYNC or ASYNC)?
My confusion is that we can have multiple ARCn processes on the database server but only one LGWR, so am I not putting extra load (which may lead to poor primary DB performance) on the primary DB if I choose LGWR for log transfer? At the same time, if I go for ARCn, I feel I am not making use of the high bandwidth between the two DBs.
Kindly have your recommendations/suggestions on this
Regards,
Amit
Hi Amit,
My suggestion would be LGWR with ASYNC.
In that case the impact to your primary database is lowest.
In this configuration, LGWR writes redo data from log buffer to online redo logs.
Network Server Process, LNS reads the online redo logs and ships the redo streams to Remote File Server Process (RFS) on standby site.
RFS takes the redo stream and updates the Active Standby Redo Log.
Be aware that Standby Redo Log is different than regular Redo Logs.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/standby.htm#i72459
Once a log switch occurs on a standby redo log, the active standby redo log is archived. Once archived, it is ready to be used by the MRP (Managed Recovery Process), which applies it to the standby database.
An LGWR (SYNC or ASYNC) configuration also reduces the possibility of losing data in case of a failover.
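A hedged sketch of the primary-side destination setting for this mode (the service name 'stby' and the VALID_FOR clause are assumptions, not from the thread):

```sql
-- Maximum performance: ship redo via LGWR asynchronously, no commit-time wait
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby LGWR ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
  SCOPE = BOTH;
```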
Read the following documents to learn more about the redo transport services:
Oracle® Data Guard Concepts and Administration
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/log_transport.htm
Data Guard Redo Transport & Network Best Practices Oracle Database 10g Release 2
(http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_DataGuardNetworkBestPractices.pdf)
Cheers, -
Error "Checkpoint not complete"
Hi Guru's
Could someone advise how to solve this problem, which shows up in the alert log:
psvdbsp01 (oracle)[gisp]/db/gisp/dba/bdump$: tail -300 alert_gisp.log
Thread 1 cannot allocate new log, sequence 262393
Checkpoint not complete
Current log# 3 seq# 262392 mem# 0: /db/gisp/redolog/gisp_redo_3a.dbf
Current log# 3 seq# 262392 mem# 1: /db/gisp/mirrlog/gisp_redo_3b.dbf
Tue Mar 12 12:31:19 2013
Thread 1 advanced to log sequence 262393 (LGWR switch)
Current log# 4 seq# 262393 mem# 0: /db/gisp/redolog/gisp_redo_4a.dbf
Current log# 4 seq# 262393 mem# 1: /db/gisp/mirrlog/gisp_redo_4b.dbf
Tue Mar 12 12:42:51 2013
Starting control autobackup
Control autobackup written to SBT_TAPE device
comment 'API Version 2.0,MMS Version 5.4.1.0',
media '5950'
handle 'GISP_auto_cf_bkup_20130312_c-1206537362-20130312-02'
Tue Mar 12 12:44:29 2013
Thread 1 cannot allocate new log, sequence 262394
Checkpoint not complete
Current log# 4 seq# 262393 mem# 0: /db/gisp/redolog/gisp_redo_4a.dbf
Current log# 4 seq# 262393 mem# 1: /db/gisp/mirrlog/gisp_redo_4b.dbf
Tue Mar 12 12:44:29 2013
Thread 1 advanced to log sequence 262394 (LGWR switch)
Current log# 1 seq# 262394 mem# 0: /db/gisp/redolog/gisp_redo_1a.dbf
Current log# 1 seq# 262394 mem# 1: /db/gisp/mirrlog/gisp_redo_1b.dbf
Tue Mar 12 12:44:29 2013
Thread 1 advanced to log sequence 262395 (LGWR switch)
Current log# 2 seq# 262395 mem# 0: /db/gisp/redolog/gisp_redo_2a.dbf
Current log# 2 seq# 262395 mem# 1: /db/gisp/mirrlog/gisp_redo_2b.dbf
Tue Mar 12 12:44:30 2013
Thread 1 advanced to log sequence 262396 (LGWR switch)
Current log# 3 seq# 262396 mem# 0: /db/gisp/redolog/gisp_redo_3a.dbf
Current log# 3 seq# 262396 mem# 1: /db/gisp/mirrlog/gisp_redo_3b.dbf
Thread 1 cannot allocate new log, sequence 262397
Checkpoint not complete
Current log# 3 seq# 262396 mem# 0: /db/gisp/redolog/gisp_redo_3a.dbf
Current log# 3 seq# 262396 mem# 1: /db/gisp/mirrlog/gisp_redo_3b.dbf
Tue Mar 12 12:44:30 2013
Thread 1 advanced to log sequence 262397 (LGWR switch)
Current log# 4 seq# 262397 mem# 0: /db/gisp/redolog/gisp_redo_4a.dbf
Current log# 4 seq# 262397 mem# 1: /db/gisp/mirrlog/gisp_redo_4b.dbf
Thread 1 cannot allocate new log, sequence 262398
Checkpoint not complete
Current log# 4 seq# 262397 mem# 0: /db/gisp/redolog/gisp_redo_4a.dbf
Current log# 4 seq# 262397 mem# 1: /db/gisp/mirrlog/gisp_redo_4b.dbf
Tue Mar 12 12:44:32 2013
Thread 1 advanced to log sequence 262398 (LGWR switch)
Current log# 1 seq# 262398 mem# 0: /db/gisp/redolog/gisp_redo_1a.dbf
Current log# 1 seq# 262398 mem# 1: /db/gisp/mirrlog/gisp_redo_1b.dbf
Thread 1 cannot allocate new log, sequence 262399
Checkpoint not complete
Current log# 1 seq# 262398 mem# 0: /db/gisp/redolog/gisp_redo_1a.dbf
Current log# 1 seq# 262398 mem# 1: /db/gisp/mirrlog/gisp_redo_1b.dbf
Tue Mar 12 12:44:32 2013
Thread 1 advanced to log sequence 262399 (LGWR switch)
Current log# 2 seq# 262399 mem# 0: /db/gisp/redolog/gisp_redo_2a.dbf
Current log# 2 seq# 262399 mem# 1: /db/gisp/mirrlog/gisp_redo_2b.dbf
Thread 1 cannot allocate new log, sequence 262400
Checkpoint not complete
Current log# 2 seq# 262399 mem# 0: /db/gisp/redolog/gisp_redo_2a.dbf
Current log# 2 seq# 262399 mem# 1: /db/gisp/mirrlog/gisp_redo_2b.dbf
And archive log list shows there is a lag in the sequence numbers:
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /db/gisp/archive_logs_rman/
Oldest online log sequence 262432
Next log sequence to archive 262435
Current log sequence 262435
SQL>
Any expert suggestion is highly appreciated.
Thanks
Edited by: 790072 on 13/03/2013 15:18
Hi,
10 MB per logfile is very low.
You can check log switches per hour using this query:
SELECT SUM (DECODE (TO_CHAR (first_time, 'hh24'), '00', 1, 0)) "h0",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '01', 1, 0)) "h1",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '02', 1, 0)) "h2",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '03', 1, 0)) "h3",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '04', 1, 0)) "h4",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '05', 1, 0)) "h5",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '06', 1, 0)) "h6",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '07', 1, 0)) "h7",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '08', 1, 0)) "h8",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '09', 1, 0)) "h9",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '10', 1, 0)) "h10",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '11', 1, 0)) "h11",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '12', 1, 0)) "h12",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '13', 1, 0)) "h13",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '14', 1, 0)) "h14",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '15', 1, 0)) "h15",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '16', 1, 0)) "h16",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '17', 1, 0)) "h17",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '18', 1, 0)) "h18",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '19', 1, 0)) "h19",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '20', 1, 0)) "h20",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '21', 1, 0)) "h21",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '22', 1, 0)) "h22",
SUM (DECODE (TO_CHAR (first_time, 'hh24'), '23', 1, 0)) "h23"
FROM V$log_history
GROUP BY TRUNC (first_time), TO_CHAR (first_time, 'Dy')
ORDER BY 1;
Oracle suggests one log switch every 20 minutes, i.e. 3 per hour.
Please post the output of my query.
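If the output confirms very frequent switches, the usual remedy is adding larger groups and dropping the small ones once they are INACTIVE; a hedged sketch (group number, paths and the 500M size are assumptions based on the thread):

```sql
-- Add a new, larger group mirroring the existing naming scheme
ALTER DATABASE ADD LOGFILE GROUP 5
  ('/db/gisp/redolog/gisp_redo_5a.dbf',
   '/db/gisp/mirrlog/gisp_redo_5b.dbf') SIZE 500M;

-- Switch until an old group shows INACTIVE in V$LOG, then drop it
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
```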
Regards -
ORA-01195: online backup of file 65 needs more recovery to be consistent
Hi,
I was doing a clone by taking a hot backup from prod to dev. The backup was good. Then I created the control file and issued the command:
recover database until cancel using backup controlfile;
It asked for the archived log files. I supplied them up to the current time.
Then I canceled.
That's when I got this error
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01195: online backup of file 65 needs more recovery to be consistent
ORA-01110: data file 65: '/d10/oradata/dwdev/kt01.dbf'
ORA-01112: media recovery not started
What am I doing wrong? I have not yet passed the command "Alter database open resetlogs"
Should I do more log switches in prod and pass those files to dev? Or should I just put the kt tablespace in backup mode and copy the datafiles?
Which set of archivelogs did you copy over to apply? All the archivelogs from the first ALTER TABLESPACE ... BEGIN BACKUP to those subsequent to the last ALTER TABLESPACE ... END BACKUP?
In the cloned database, what messages do you see in the alert.log on having issued the RECOVER DATABASE command? Does it complain about the datafiles being fuzzy? Which archivelogs does it show as having been applied?
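A sketch for checking that from the catalog of archived logs (the 2-day window is an assumption):

```sql
-- Archived logs generated during and after the hot backup
SELECT sequence#, first_time, next_time
FROM v$archived_log
WHERE first_time >= SYSDATE - 2
ORDER BY sequence#;
```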
Can you check the log sequence numbers for the duration of the backup, plus the archivelogs subsequent to the backup? -
Hi,
Maybe someone can help me on this.
We have a RAC database in production for which (some) applications need a response within 0.5 seconds. In general that works.
Outside of production hours we make a weekly full backup and daily incremental backups, so those don't bother us. However, as soon as we make an archivelog backup or a backup of the control file during production hours, we have a problem: the applications have to wait more than 0.5 seconds for a response, caused by the event "log file sync" with wait class "Commit".
I already adjusted the RMAN script so that we have only one file per backup set and use only one channel, but that didn't help.
Increasing the log buffer was also not a success.
Increasing the large pool is in our case not an option.
We have 8 redo log groups with two members each (250 MB per member) and an average of 12 log switches per hour during the day, which is not very alarming. Even during the backup the I/O doesn't show very high activity. The increase in I/O at that moment is minor, but apparently enough to cause the "log file sync" waits.
Oracle has no documentation that gives me more possible causes.
Strange thing is that before the first of October we didn't have this problem and there were no changes made.
Has anyone an idea where to look further or did anyone experience a thing like this and was able to solve it?
Kind regards
The only possible contention I can see is between the log writer and the archiver. 'Backup archivelog' in RMAN implicitly does an 'ALTER SYSTEM ARCHIVE LOG CURRENT' (a log switch plus archiving of the online log).
You should alternate redo logs on different disks to minimize the effect of the archiver on the log writer.
Werner -
Hi, I have created a standby DB for Oracle 10.1.0.2.0 on the same server as the primary DB. As per the Oracle docs I have configured everything, but when I do a log switch on the primary DB and then check the standby DB with:
SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME
FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
it shows no records, which means it is not working.
I have created standby redo log files in the primary DB, as I am using the LGWR ASYNC mode of redo transport, but when I do:
SQL> SELECT GROUP#,THREAD#,SEQUENCE#,ARCHIVED,STATUS FROM V$STANDBY_LOG;
GROUP# THREAD# SEQUENCE# ARC STATUS
4 2 0 YES UNASSIGNED
it shows the status UNASSIGNED.
please help me to sort out this problem....
Regards
Hi
Thanks to all. Yes, I checked the alert logs for both DBs; there were some problems due to the DB_FILE_NAME_CONVERT init parameter setting. After I fixed that, I am getting this error in the lgwr trace file:
Dump file c:\oracle\product\10.1.0\admin\orcl\bdump\orcl_lgwr_1368.trc
Tue Mar 11 15:39:17 2008
ORACLE V10.1.0.2.0 - Production vsnsta=0
vsnsql=13 vsnxtr=3
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Windows XP Version V5.1 Service Pack 2
CPU : 1 - type 586
Process Affinity: 0x00000000
Memory (A/P) : PH:14M/255M, PG:123M/909M, VA:1749M/2047M
Instance name: orcl
Redo thread mounted by this instance: 1
Oracle process number: 5
Windows thread id: 1368, image: ORACLE.EXE (LGWR)
*** SERVICE NAME:() 2008-03-11 15:39:17.008
*** SESSION ID:(167.1) 2008-03-11 15:39:17.008
LGWR: Archivelog for thread 1 sequence 42 will NOT be compressed
*** 2008-03-11 15:39:17.227 47100 kcrr.c
Making upidhs request to NetServer -1 (ocis 0x05FCE6E0)
ERROR: kcrrnsupidhs nsidx=-1
*** 2008-03-11 15:39:17.258 45203 kcrr.c
Initializing NetServer for dest=stby
Initializing PGA storage for Netserver communication
Allocating a brand new NetServer
Allocated NetServer 0
NetServer 0 has been started.
Subscribing to KSR Channel 0
success!
Indicating recv buffer for KSR Channel 0
success
Waiting for Netserver 0 to initialize itself
*** 2008-03-11 15:39:20.415 45488 kcrr.c
Netserver 0 has been initialized
LGWR performing a channel reset to ignore previous responses
LGWR connecting as publisher to KSR Channel 0
LGWR-NS 0 initialized for destination=stby
*** 2008-03-11 15:39:20.415 45982 kcrr.c
Making upiahm request to NetServer 0
Waiting for NetServer 0 to respond to upiahm
*** 2008-03-11 15:39:24.274 47100 kcrr.c
Making upidhs request to NetServer 0 (ocis 0x05FCE6E0)
NetServer pid:2916
*** 2008-03-11 15:39:24.274 47331 kcrr.c
upidhs done status 1031
*** 2008-03-11 15:39:24.274 46176 kcrr.c
upiahm connect done status is 1031
Error 1031 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'stby'
*** 2008-03-11 15:39:24.274 47100 kcrr.c
Making upidhs request to NetServer -1 (ocis 0x05FCE6E0)
ERROR: kcrrnsupidhs nsidx=-1
Error 1031 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'stby'
ORA-01031: insufficient privileges
*** 2008-03-11 15:39:24.336 52124 kcrr.c
LGWR: Error 1031 creating archivelog file 'stby'
*** 2008-03-11 15:39:24.383 50571 kcrr.c
kcrrfail: dest:2 err:1031 force:0 blast:1
*** 2008-03-11 15:39:37.680
Old character set id was US7ASCII
*** 2008-03-11 15:39:38.055
New character set id is WE8MSWIN1252
Now, how do I give the privilege needed to connect to the service 'stby'?
Please give me some guidance.
Regards
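The ORA-01031 here is raised when LGWR tries to attach to LOG_ARCHIVE_DEST_2, which usually points at SYS authentication between primary and standby (the password file) rather than at a privilege on the service itself. A minimal sketch of the usual checks, assuming a 10g setup; the ORACLE_HOME path, file names and SYS password below are illustrative, not taken from this system:

```sql
-- Both databases must allow password-file authentication
-- (expect EXCLUSIVE; changing it requires an instance restart):
show parameter remote_login_passwordfile

-- Recreate the password file on the primary with a known SYS password
-- (path, file name and password here are assumptions -- adjust them):
host orapwd file=c:\oracle\product\10.1.0\db_1\database\PWDorcl.ora password=oracle force=y

-- Then copy that file to the standby host, renamed for the standby SID
-- (e.g. PWDstby.ora), so SYS authenticates identically on both sides,
-- and restart the instances.
```

The key point is that the redo transport logs in as SYS, so the password file contents must match on primary and standby.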
OS thread startup in Top 5 Timed Events (AWR)
Hi all,
I have Oracle 10.2.0.5 on HP-UX.
I'm experiencing some slowness. While checking the AWR report I see the following:
Top 5 Timed Events
Event                                     Waits  Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                            732                             28.0
os thread startup                           983     665           676             25.4  Concurrency
log file switch (checkpoint incomplete)   1,279     617           482             23.6  Configuration
row cache lock                           98,641     577             6             22.1  Concurrency
latch: session allocation                 1,377     253           184              9.7  Other
What could be the reason for os thread startup? Too many processes due to parallelism? I have all tables set to NOPARALLEL.
Regarding log file switch (checkpoint incomplete), I increased the redo log size from 100 MB to 200 MB to reduce the frequency of log switching.
Thanks in advance.
Google is your friend, but only when you actually use it!
http://karlarao.wordpress.com/2009/04/06/os-thread-startup/ -
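On the redo-size side, whether 200 MB is enough can be checked from the data dictionary rather than guessed. A quick sketch using the standard V$LOG and V$LOG_HISTORY views (the one-day window and the 15-20 minute rule of thumb are assumptions, not fixed rules):

```sql
-- Current online redo log groups and their sizes
select group#, thread#, bytes/1024/1024 as size_mb, members, status
from   v$log
order  by group#;

-- Log switches per hour over the last day; a common rule of thumb
-- is to size the logs so a switch happens every 15-20 minutes or so
select to_char(first_time, 'YYYY-MM-DD HH24') as hour,
       count(*)                               as switches
from   v$log_history
where  first_time > sysdate - 1
group  by to_char(first_time, 'YYYY-MM-DD HH24')
order  by 1;
```

If the hourly switch count stays high after the resize, the checkpoint-incomplete waits may instead point at DBWR not keeping up (e.g. too few log groups or slow datafile I/O).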
Active Data Guard but with a delay on applying redo logs
Hello,
A question: Active Data Guard gives the possibility of having a physical standby database open read-only, for instance for making up-to-date reports.
Data Guard also has the possibility to apply logs with a delay, for instance a delay of some hours. Every commit on the primary database is applied to the standby after the delay period.
Can this delay be combined with Active Data Guard? That would mean the standby database is open read-only but with the delay... If the delay is 2 hours, every report created on the standby is "up to date minus 2 hours"...
Thanks for your answer,
Jan.
Hello All,
I answered my own question: indeed it is possible to activate 'real time apply' but with a delay in applying the logs. In EM 11g, check the box "enable real time query" and apply. Next, go to the tab "standby role properties", change the value after "Apply delay" to, for instance, 30 minutes, and apply that as well.
Log on to the primary, create a table, insert some rows and do some log switches. Log in on the standby and check for the new table. Notice that the switched logs are applied after 30 minutes, even if you change the delay period within those 30 minutes. The delay information seems to travel with the log: it is possible to change the delay to 3 minutes immediately after the log switches, drop the table and do some log switches again. The logs containing the 'create table' statement are NOT applied after 3 minutes, but still after 30 minutes; the drop of the table, however, shows up on the standby after 3 minutes. So the delay information is in the log sent over to the standby!