Log sequence difference among RAC Instances
RDBMS Version : 11.2
Platform : Solaris 10
In our RAC DB, we have 6 redo logs.
SQL> select * from v$logfile order by 1,2;
GROUP# STATUS TYPE MEMBER IS_
1 ONLINE +ORCL_ARCH01/ORCL_ARCH/orcl_log01.dbf NO
2 ONLINE +ORCL_ARCH01/ORCL_ARCH/orcl_log02.dbf NO
3 ONLINE +ORCL_ARCH01/ORCL_ARCH/orcl_log03.dbf NO
4 ONLINE +ORCL_ARCH01/ORCL_ARCH/orcl_log04.dbf NO
5 ONLINE +ORCL_ARCH01/ORCL_ARCH/orcl_log05.dbf NO
6 ONLINE +ORCL_ARCH01/ORCL_ARCH/orcl_log06.dbf NO
6 rows selected.
Logically, or shall I say 'internally', three redo log groups are allocated per thread. But they all belong to one database.
SQL> select group#,thread#,members from v$log order by 1,2;
GROUP# THREAD# MEMBERS
1 1 1
2 1 1
3 1 1
4 2 1
5 2 1
6 2 1
6 rows selected.
SQL> exit
If all these 6 redo log files belong to one database, then why do they have different max(sequence#)?
i.e. the 'Oldest online log sequence' in Node1 shows 3432 and
the 'Oldest online log sequence' in Node2 shows 3188 (a difference of 244).
-- Instance1 in Node1
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination +ORCL_ARCH01/ORCL_ARCH
Oldest online log sequence 3432 ------------------------->
Next log sequence to archive 3434
Current log sequence 3434
-- Instance 2 in Node2
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination +ORCL_ARCH01/ORCL_ARCH
Oldest online log sequence 3188 ------------------------> ( 3432 - 3188 = 244)
Next log sequence to archive 3190
Current log sequence 3190
So, does this mean Node1 is more loaded than Node2?
And in a properly configured RAC setup, should both instances show roughly similar numbers for 'Oldest online log sequence' (ignoring manual switches)?
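For what it's worth, each redo thread numbers its log sequences independently, so the per-thread figures can be compared directly from V$LOG. A minimal sketch (standard V$LOG columns; run from either instance):

```sql
-- Each redo thread has its own sequence counter, so a gap between
-- threads only reflects different redo generation rates per instance.
SELECT thread#,
       MIN(sequence#) AS oldest_online_seq,
       MAX(sequence#) AS current_seq
FROM   v$log
GROUP  BY thread#
ORDER  BY thread#;
```

A large gap between the two threads' current sequences (3434 vs 3190 here) simply means instance 1 has switched logs more often, i.e. generated more redo; it is not by itself a configuration problem.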
Similar Messages
-
Next log sequence to archive in Standby Database (RAC Dataguard Issue)
Hi All,
I have just implemented Data Guard on our server. My primary database is RAC-configured, but it is only a single node; the other instance was removed and converted to a development instance. The reason I kept the primary as RAC is that when I implement Data Guard in production, my primary database will be a RAC with 7 nodes.
The first test was successful, and I was able to switch over from my primary to the standby. I failed in the failover test.
I restored my primary server and redid the setup.
BTW, my standby DB is physical standby.
When I try to switchover again and issue archive log list, below is my output.
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 38
*Next log sequence to archive 0*
Current log sequence 38
SQL> select open_mode, database_role from v$database;
OPEN_MODE DATABASE_ROLE
MOUNTED PHYSICAL STANDBY
===============================================
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 38
*Next log sequence to archive 38*
Current log sequence 38
SQL> select open_mode, database_role from v$database;
OPEN_MODE DATABASE_ROLE
READ WRITE PRIMARY
In my first attempt at switchover, before I failed the failover test, I also issued 'archive log list' on both the primary and standby databases, and if I remember right, the next log sequence on both should be identical. Am I right on this?
Thanks in Advance.
Jay A
Or am I just overthinking this?
Is dataguard only looking for the current and oldest log sequence? -
How to determine which RAC-instance the appl. is logged onto?
Dear all,
I need to have my application server determine which RAC-
instance is currently active (logged onto). I have a
tnsnames.ora file with a primary-, and secondary RAC-
instance configured, and Failover/Failback between the
instances work fine. However, I would be interested in
determining which instance I am curently using.
Does the Oracle Net Protocol have support for letting me
"read" this out, or...?
Thanks.
Regards, Eldor R.
Thank you for the prompt reply.
Is there, in the Oracle Net Protocol, available
function(s) for reading out this information
directly without "parsing" the trace file?
I would like to read out this information from my
application run-time.
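One documented way to read this at run time, without parsing trace files, is the SYS_CONTEXT function; a minimal sketch (SERVER_HOST requires 10g or later):

```sql
-- Returns the name of the RAC instance (and its host) servicing the
-- current session; works from any client once a session is connected.
SELECT SYS_CONTEXT('USERENV', 'INSTANCE_NAME') AS instance_name,
       SYS_CONTEXT('USERENV', 'SERVER_HOST')   AS server_host
FROM   dual;
```

The application can issue this query right after (re)connecting, so a failover to the secondary instance is detected immediately.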
Thanks. -
10.2 Doc. says: http://download.oracle.com/docs/cd/B19306_01/backup.102/b14194/rcmsynta008.htm#RCMRF106
>
Although the SEQUENCE parameter does not require that THREAD be specified, a given log sequence always implies a thread
>
Does this mean that the log sequence is unique across all threads? If not, what does this sentence mean?
Note that in my 10.2.0.1 RAC database log sequence is unique only for a given thread:
1 select thread#, sequence#, name, status
2 from v$archived_log
3 where sequence# between 34 and 35
4* order by sequence#
SQL> /
THREAD# SEQUENCE# NAME S
1 34 D
2 34 /u02/fra/RAC/archivelog/2010_08_20/o1_mf_2_34_66xb2ttb_.arc A
1 34 /u02/fra/RAC/archivelog/2010_08_20/o1_mf_1_34_66xcgmbn_.arc A
2 35 /u02/fra/RAC/archivelog/2010_08_20/o1_mf_2_35_66xb48b8_.arc A
1 35 /u02/fra/RAC/archivelog/2010_08_20/o1_mf_1_35_66xcgkk0_.arc A
1 35 D
6 rows selected.
I sent mail to the writer and got this in reply:
The sentence will be rewritten to state that: "When you do not explicitly specify a thread number with the sequence number in the command, thread number 1 is used."
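In other words, an archived log is only uniquely identified by the (thread#, sequence#) pair, and RMAN commands accept a THREAD qualifier; a hedged sketch (sequence numbers taken from the listing above, RMAN prompt assumed):

```sql
-- Qualifying SEQUENCE with THREAD removes the ambiguity, since both
-- threads in the listing above have a sequence 34 and 35.
RESTORE ARCHIVELOG SEQUENCE BETWEEN 34 AND 35 THREAD 2;
-- Without the THREAD clause, thread 1 is assumed, per the doc fix.
```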
----------------- -
Different instructions for disable arch log mode on 11Gr2 RAC server?
Hello all,
I've run into a problem where I've lost my tape drive...and have no sysadmins to help.
I don't want my RAC instances to run out of space and halt, so I'm planning to take them out of archive log mode and just do daily exports until I can move them or get the tape going again.
This is easy enough with a non-clustered instance, but I'm reading around and finding conflicting information for doing it on a RAC system.
In the Oracle® Real Application Clusters Administration and Deployment Guide
11g Release 2 (11.2)...it states in simple terms:
(http://download.oracle.com/docs/cd/E11882_01/rac.112/e16795/rman.htm#i474611)
"In order for redo log files to be archived, the Oracle RAC database must be in ARCHIVELOG mode. You can run the ALTER DATABASE SQL statement to change the archiving mode in Oracle RAC, because the database is mounted by the local instance but not open in any instances. You do not need to modify parameter settings to run this statement."
and that's about it.
I've been researching and found a couple of other, non-official guides that describe a much more involved process, roughly along this path:
1. Connect to one instance with SQL*Plus and run: alter system set cluster_database=false scope=spfile sid='specific_node_name';
2. Shut down all instances: srvctl stop database -d <db_name>
3. Start the instance you changed cluster_database on with SQL*Plus: startup mount;
4. On this instance, run: ALTER DATABASE NOARCHIVELOG;
5. On the same instance, change the cluster parameter back: alter system set cluster_database=true scope=spfile sid='specific_node_name';
6. Shut down this single instance.
7. Start all instances: srvctl start database -d <db_name>
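A sketch of those seven steps as commands (the database name orcldb is an assumption; the srvctl lines run from the OS shell, the rest in SQL*Plus as SYSDBA):

```sql
-- Step 1: from one instance, make the database single-instance at next start
ALTER SYSTEM SET cluster_database=FALSE SCOPE=SPFILE SID='*';
-- Step 2 (shell): srvctl stop database -d orcldb
-- Step 3: start only the modified instance
STARTUP MOUNT;
-- Step 4: disable archiving (database is mounted, not open anywhere)
ALTER DATABASE NOARCHIVELOG;
-- Step 5: restore the cluster parameter
ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE SID='*';
-- Step 6: SHUTDOWN IMMEDIATE;
-- Step 7 (shell): srvctl start database -d orcldb
```

Whether SID='*' or a specific instance SID is used for the parameter change is a judgment call; either way the parameter must be back to TRUE before restarting the whole cluster.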
I've found references to this at:
http://oracle-dba-yi.blogspot.com/2010/12/enabledisable-archive-log-mode-in-11gr2.html
and
http://www.dba-oracle.com/bk_disable_archive_log_mode.htm
Among other sites. I'm curious why the Oracle documentation doesn't mention all these steps.
I'm guessing the longer version is the path I should take, but I wanted to ask here first whether this is correct.
I'm on Oracle 11Gr2....hasn't been patched with latest patchset, running on RHEL5, and is a 5 node cluster.
Thank you in advance,
cayenne
Edited by: cayenne on Oct 21, 2011 11:51 AM
Fiedi Z wrote:
There are a couple of things you need to consider:
- daily exports are not a backup strategy
- you're putting your company at risk by disabling archivelog mode
Your company has a 5-node RAC, so I assume this is a mid-to-large company. A question you might ask yourself: does your company really not have any available disk space for you to back up to a temporary location or server?
However, if you still insist on your strategy, then follow the links you have; that is how to disable archivelog mode in RAC.
Cheers
Thank you everyone for the comments.
This is a DEV environment, and they are planning to move this all to a new facility where we won't have the power outages and old, defunct equipment.
Right now I do not have drive space to put all of this. I've informed them of the risks of not having point-in-time recovery. I really don't see any other choice; I don't want to run in noarchivelog mode either, but I've been without tape to move the logs off for days now, and even with low traffic I'm afraid they will fill and I'll have databases halting.
I think at this point (and again, this is not production data) I'm going to have to go with daily exports, and that will have to do until I can get these servers moved to the new facility soon.
Again, thank you for the comments!!!
cayenne -
RAC instance, trying to recover UNDO datafile, RMAN gives RMAN-06054
Hello all,
This has been a troublesome instance, so a quick bit of background: it was created a while back by someone else, and I inherited this 3-node RAC cluster running instance1.
I'm exporting out of one database (10g) into this instance1 (11g). When I was about to start the import, I found this instance wouldn't start. It turned out no backups had been taken of this empty instance. I backed up the archive logs to tape to free up the FRA, and things fired up.
I began the import and hit a bunch of errors, basically telling me that I couldn't access one of the undo tablespaces: datafile problems.
I went to look and saw:
SQL> select a.file_name, a.file_id, b.status, a.tablespace_name
2 from dba_data_files a, v$datafile b
3 where a.file_id = b.file#
4 order by a.file_name;
FILE_NAME FILE_ID STATUS TABLESPACE_NAME
+DATADG/instance1/datafile/sysaux.270.696702269 2 ONLINE SYSAUX
+DATADG/instance1/datafile/system.263.696702253 1 SYSTEM SYSTEM
+DATADG/instance1/datafile/undotbs1.257.696702279 3 ONLINE UNDOTBS1
+DATADG/instance1/datafile/undotbs2.266.696702305 4 ONLINE UNDOTBS2
+DATADG/instance1/datafile/undotbs3.269.696702313 5 RECOVER UNDOTBS3
+DATADG/instance1/datafile/users.268.696702321 6 ONLINE USERS
+DATADG/instance1/l_data_01_01 11 ONLINE L_DATA_01
+DATADG/instance1/s_data_01_01 7 ONLINE S_DATA_01
+DATADG/instance1/s_data_01_02 8 ONLINE S_DATA_01
+INDEXDG/instance1/l_index_01_01 12 ONLINE L_INDEX_01
+INDEXDG/instance1/s_index_01_01 9 ONLINE S_INDEX_01
FILE_NAME FILE_ID STATUS TABLESPACE_NAME
+INDEXDG/instance1/s_index_01_02 10 ONLINE S_INDEX_01
There is is, file #5.
So, I went into RMAN to try to restore/recover:
RMAN> restore datafile 5;
Starting restore at 06-APR-10
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=222 instance=instance1 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: NMO v4.5.0.0
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=222 instance=instance1 device type=DISK
creating datafile file number=5 name=+DATADG/instance1/datafile/undotbs3.269.696702313
restore not done; all files read only, offline, or already restored
Finished restore at 06-APR-10
RMAN> recover datafile 5;
Starting recover at 06-APR-10
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1
starting media recovery
RMAN-06560: WARNING: backup set with key 343546 will be read 2 times
available space of 8315779 kb needed to avoid reading the backup set multiple times
unable to find archived log
archived log thread=1 sequence=1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/06/2010 14:33:07
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 1 and starting SCN of 16016
This is all on ASM, and I am a bit of a newbie with that. I basically have no data I'm worried about losing; I just need to get everything 'on the air' so I can import successfully and let users onto this instance. I've set up the backups in Grid Control now, so it will be backed up in the future, but what is the quickest, most efficient way to get this UNDO tablespace datafile recovered?
Thank you,
cayenne
Hemant K Chitale wrote:
"SET UNTIL SEQUENCE 27" wouldn't work if the recovery requires sequence 1 and it is missing.
Hemant K Chitale
Oops... I meant to have the start and SET UNTIL both at "1".
However, I see what you mean. It seems I cannot find the file on tape.
Since the RAC instance hasn't yet had any data put into it, I'm thinking it might be best to just blow it away and recreate everything.
The trouble is, I'm a bit new at RAC and ASM. I was thinking the best route might be to use DBCA to remove the database. Would this not take care of removing all the datafiles from the ASM instances on the RAC, as well as all the other directories, etc., on all 3 nodes?
I've already used DBCA to create templates of this instance, so recreation shouldn't be too difficult (although it will be my first RAC creation).
Thank you in advance for the advice so far,
cayenne -
Rconfig: converting a single instance to RAC instance
Hi,
I am trying to use the rconfig utility to convert a single instance to a RAC instance in an existing RAC cluster.
I have modified the .xml file and am trying to run the conversion from the first node in the 2-node cluster (where the single instance resides).
The only error message I seem to be getting is below:
<Response>
<Result code="1" >
Operation Failed
</Result>
<ErrorDetails>
ORCL_DATA_ORCLCLN The specified diskgroup is not mounted.
</ErrorDetails>
</Response>
</Convert>
</ConvertToRAC></RConfig>
Now, I don't really understand why I would be getting that message, as the instance is up and running and the ASM disk group is mounted on node1 at the time I run the rconfig command. It's not clear to me whether I also need to mount the ASM disk group on the second node prior to running rconfig.
node1:
bash-3.00$ asmcmd -p
ASMCMD [+] > lsdg
State Type Rebal Unbal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
MOUNTED EXTERN N N 512 4096 1048576 10181 7442 0 7442 0 ORCL_DATA_ORCLCLN/
node2:
ASMCMD [+] > lsdg
State Type Rebal Unbal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Name
I have attached the output of the alert log during the rconfig conversion of the target database, but it all looks pretty standard to me (keep in mind I am an Oracle novice!).
alert.log
Completed: ALTER DATABASE OPEN
Thu Jul 23 13:51:55 2009
Shutting down instance (abort)
License high water mark = 2
Instance terminated by USER, pid = 15030
Thu Jul 23 13:51:57 2009
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Interface type 1 e1000g1 10.128.113.0 configured from OCR for use as a cluster interconnect
Interface type 1 e1000g0 10.128.113.0 configured from OCR for use as a public interface
Picked latch-free SCN scheme 2
Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/10.2.0/db_1/dbs/arch
Autotune of undo retention is turned on.
IMODE=BR
ILAT =18
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.2.0.
System parameters with non-default values:
processes = 150
__shared_pool_size = 121634816
__large_pool_size = 4194304
__java_pool_size = 4194304
__streams_pool_size = 0
sga_target = 440401920
control_files = +ORCL_DATA_ORCLCLN/control01.ctl
db_block_size = 8192
__db_cache_size = 306184192
compatible = 10.2.0.2.0
log_archive_format = %t_%s_%r.dbf
db_file_multiblock_read_count= 16
cluster_database = FALSE
cluster_database_instances= 1
db_recovery_file_dest_size= 2147483648
norecovery_through_resetlogs= TRUE
undo_management = AUTO
undo_tablespace = UNDOTBS1
remote_login_passwordfile= EXCLUSIVE
db_domain = netapp.com
job_queue_processes = 10
background_dump_dest = /u01/app/oracle/admin/orcldb/bdump/ORCLCLN
user_dump_dest = /u01/app/oracle/admin/orcldb/udump/ORCLCLN
core_dump_dest = /u01/app/oracle/admin/orcldb/cdump/ORCLCLN
db_name = ORCLCLN
open_cursors = 300
pga_aggregate_target = 145752064
Cluster communication is configured to use the following interface(s) for this instance
10.128.113.200
Thu Jul 23 13:51:59 2009
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=15085
DIAG started with pid=3, OS id=15091
PSP0 started with pid=4, OS id=15094
LMON started with pid=5, OS id=15097
LMD0 started with pid=6, OS id=15102
MMAN started with pid=7, OS id=15112
DBW0 started with pid=8, OS id=15114
LGWR started with pid=9, OS id=15116
CKPT started with pid=10, OS id=15125
SMON started with pid=11, OS id=15128
RECO started with pid=12, OS id=15130
CJQ0 started with pid=13, OS id=15134
MMON started with pid=14, OS id=15143
MMNL started with pid=15, OS id=15146
Thu Jul 23 13:52:03 2009
lmon registered with NM - instance id 1 (internal mem no 0)
Thu Jul 23 13:52:04 2009
Reconfiguration started (old inc 0, new inc 2)
List of nodes:
0
Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
Communication channels reestablished
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Resources and enqueues cleaned out
Resources remastered 0
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Post SMON to start 1st pass IR
Submitted all GCS remote-cache requests
Post SMON to start 1st pass IR
Reconfiguration complete
Thu Jul 23 13:52:04 2009
ALTER DATABASE MOUNT
Thu Jul 23 13:52:04 2009
Starting background process ASMB
ASMB started with pid=17, OS id=15157
Starting background process RBAL
RBAL started with pid=18, OS id=15169
Thu Jul 23 13:52:09 2009
SUCCESS: diskgroup ORCL_DATA_ORCLCLN was mounted
Thu Jul 23 13:52:13 2009
Setting recovery target incarnation to 2
Thu Jul 23 13:52:13 2009
Successful mount of redo thread 1, with mount id 4437636
Thu Jul 23 13:52:13 2009
Database mounted in Exclusive Mode
Completed: ALTER DATABASE MOUNT
Thu Jul 23 13:52:14 2009
ALTER DATABASE OPEN
Thu Jul 23 13:52:14 2009
Beginning crash recovery of 1 threads
Thu Jul 23 13:52:14 2009
Started redo scan
Thu Jul 23 13:52:14 2009
Completed redo scan
105 redo blocks read, 32 data blocks need recovery
Thu Jul 23 13:52:14 2009
Started redo application at
Thread 1: logseq 2, block 929
Thu Jul 23 13:52:15 2009
Recovery of Online Redo Log: Thread 1 Group 2 Seq 2 Reading mem 0
Mem# 0 errs 0: +ORCL_DATA_ORCLCLN/redo_2_1.log
Mem# 1 errs 0: +ORCL_DATA_ORCLCLN/redo_2_0.log
Thu Jul 23 13:52:15 2009
Completed redo application
Thu Jul 23 13:52:15 2009
Completed crash recovery at
Thread 1: logseq 2, block 1034, scn 613579
32 data blocks read, 25 data blocks written, 105 redo blocks read
Thu Jul 23 13:52:15 2009
Thread 1 advanced to log sequence 3
Thread 1 opened at log sequence 3
Current log# 1 seq# 3 mem# 0: +ORCL_DATA_ORCLCLN/redo_1_1.log
Current log# 1 seq# 3 mem# 1: +ORCL_DATA_ORCLCLN/redo_1_0.log
Successful open of redo thread 1
Thu Jul 23 13:52:15 2009
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Thu Jul 23 13:52:15 2009
SMON: enabling cache recovery
Thu Jul 23 13:52:17 2009
Successfully onlined Undo Tablespace 1.
Thu Jul 23 13:52:17 2009
SMON: enabling tx recovery
Thu Jul 23 13:52:17 2009
Database Characterset is WE8ISO8859P1
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=21, OS id=15328
Thu Jul 23 13:52:23 2009
Completed: ALTER DATABASE OPEN
Any help would be greatly appreciated!
Ok,
so I managed to get the disk group mounted on the second node and re-ran the rconfig process.
I got a little further, but encountered another error, displayed below:
-bash-3.00$ rconfig racconv.xml
<?xml version="1.0" ?>
<RConfig>
<ConvertToRAC>
<Convert>
<Response>
<Result code="1" >
Operation Failed
</Result>
<ErrorDetails>
/u01/app/oracle/product/10.2.0/db_1/dbs Data File is not shared across all nodes in the cluster
</ErrorDetails>
</Response>
</Convert>
</ConvertToRAC></RConfig>
I am not using a shared Oracle home; each node in the cluster has its own Oracle installation residing on local disk. Is a shared Oracle home a prerequisite for using rconfig?
I have provided the .xml file I am using below:
-bash-3.00$ cat racconv.xml
<?xml version="1.0" encoding="UTF-8"?>
<n:RConfig xmlns:n="http://www.oracle.com/rconfig"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.oracle.com/rconfig">
<n:ConvertToRAC>
<!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->
<n:Convert verify="YES">
<!--Specify current OracleHome of non-rac database for SourceDBHome -->
<n:SourceDBHome>/u01/app/oracle/product/10.2.0/db_1</n:SourceDBHome>
<!--Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome -->
<n:TargetDBHome>/u01/app/oracle/product/10.2.0/db_1</n:TargetDBHome>
<!--Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion -->
<n:SourceDBInfo SID="ORCLCLN">
<n:Credentials>
<n:User>oracle</n:User>
<n:Password>password</n:Password>
<n:Role>sysdba</n:Role>
</n:Credentials>
</n:SourceDBInfo>
<!--ASMInfo element is required only if the current non-rac database uses ASM Storage -->
<n:ASMInfo SID="+ASM1">
<n:Credentials>
<n:User>oracle</n:User>
<n:Password>password</n:Password>
<n:Role>sysdba</n:Role>
</n:Credentials>
</n:ASMInfo>
<!--Specify the list of nodes that should have rac instances running. LocalNode should be the first node in this nodelist. -->
<n:NodeList>
<n:Node name="sol002"/>
<n:Node name="sol003"/>
</n:NodeList>
<!--Specify prefix for rac instances. It can be same as the instance name for non-rac database or different. The instance number will be attached to this prefix. -->
<n:InstancePrefix>ORCLCLN</n:InstancePrefix>
<!--Specify port for the listener to be configured for rac database.If port="", alistener existing on localhost will be used for rac database.The listener will be extended to all nodes in the nodelist -->
<n:Listener port=""/>
<!--Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. -->
<n:SharedStorage type="ASM">
<!--Specify Database Area Location to be configured for rac database.If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. -->
<n:TargetDatabaseArea></n:TargetDatabaseArea>
<!--Specify Flash Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. -->
<n:TargetFlashRecoveryArea></n:TargetFlashRecoveryArea>
</n:SharedStorage>
</n:Convert>
</n:ConvertToRAC>
</n:RConfig> -
Difference between v$instance and v$thread
Hallo,
What is the difference between v$instance and v$thread?
Thanks.
Well, a mere description of the views in the official documentation isn't going to help you much more than a simple "desc v$instance" / "desc v$thread" would, I suspect.
The answer I think you might be looking for is that a thread is not the same thing as an instance. In fact, we usually talk about "a thread of redo", which gets much closer to the issue here. Yes, it is true that one instance can only generate one 'thread of redo', so one instance = one thread, and there's a temptation to extrapolate from that and assume they're the same thing.
But they're not. An instance is a set of memory structures and processes. It has a name. It gets started and stopped. It can be running in various states (nomount, mount, open, read-only, primary, standby, etc.).
A thread of redo is a history of redo generated by an instance. It has ever-incrementing checkpoint change numbers, times of last checkpoints, sequence numbers, blocks of redo that are forever being flushed to different parts of the current redo log. And so on.
Thus, V$INSTANCE tells you the state of the instance and V$THREAD tells you all you could ever want to know about where you're up to in the writing-redo stakes. -
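That distinction can be seen directly in the dictionary; a minimal sketch joining the two views (GV$ versions so every RAC instance appears, each matched to its own redo thread):

```sql
-- One row per instance (memory/processes) joined to its redo thread
-- (checkpoint and log-sequence bookkeeping).
SELECT i.instance_name,
       i.status            AS instance_status,
       t.thread#,
       t.sequence#         AS current_log_seq,
       t.checkpoint_change#
FROM   gv$instance i
JOIN   gv$thread   t
  ON   t.inst_id = i.inst_id
 AND   t.thread# = i.thread#;
```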
We have Oracle Databases 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production running on Linux x86 64-bit
It is a two-instance RAC running on two servers, let's say node1 and node2. We are using ASM.
Node 1 has an ASM instance ASM1 and Node 2 has an ASM instance ASM2.
There are 3 11g rdbms databases running on these nodes.
Instances db11, db21, and db31 are running on node 1, and the corresponding RAC instances db12, db22, and db32 are running on node 2.
The listeners are configured exactly the same on both nodes.
On Node 2, when I do
[oracle@node2 admin]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 19-NOV-2010 14:34:34
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date 15-NOV-2010 13:33:49
Uptime 4 days 1 hr. 0 min. 44 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /opt/oracle/product/11.2.0/grid/network/admin/listener.ora
Listener Log File /opt/app/oracle/diag/tnslsnr/node2/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.7.42)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.7.52)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "db1" has 2 instance(s).
Instance "db11", status READY, has 1 handler(s) for this service...
Instance "db12", status READY, has 2 handler(s) for this service...
Service "db2" has 2 instance(s).
Instance "db21", status READY, has 1 handler(s) for this service...
Instance "db22", status READY, has 2 handler(s) for this service...
Service "db3" has 2 instance(s).
Instance "db31", status READY, has 1 handler(s) for this service...
Instance "db32", status READY, has 2 handler(s) for this service...
The command completed successfully
The above looks good, which is what should be the case.
Now, if I try the same on node 1 (and this is where I am concerned):
[oracle@node1 admin]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 19-NOV-2010 14:41:45
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date 19-NOV-2010 03:20:44
Uptime 0 days 11 hr. 21 min. 1 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /opt/oracle/product/11.2.0/grid/network/admin/listener.ora
Listener Log File /opt/app/oracle/diag/tnslsnr/node1/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.7.41)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.7.51)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "db1" has 1 instance(s).
Instance "db11", status READY, has 2 handler(s) for this service...
Service "db2" has 1 instance(s).
Instance "db21", status READY, has 2 handler(s) for this service...
Service "db3" has 1 instance(s).
Instance "db31", status READY, has 2 handler(s) for this service...
The command completed successfullyThe node 1 does not seem to report the fact that each of these 3 databases have 2 instances and also does not list its other instances besides the ones running on it. Any ideas or suggestions as to where to look?One problem is here
The listeners are configured exactly the same on both nodes.This is incorrect, as the listeners need to have different names.
They need to have different names as listener_node1 is the remote_listener for node 2 and vice versa.
The correct set up is:
The listener name is node dependent.
The listener definition
listener_<node>=(host=)(protocol=)(port=)
is included in tnsnames.ora
the remote_listener parameter is set to the listener of the other node.
Using hardcoded IPs in listener.ora and tnsnames.ora is a bad idea.
Not sure why you don't ask this question in the RAC forum.
Sybrand Bakker
Senior Oracle DBA -
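A minimal sketch of the cross-registration Sybrand describes (the VIP host names, listener aliases, and instance SIDs below are assumptions, not taken from the poster's system):

```sql
-- tnsnames.ora on both nodes (hypothetical VIP host names):
--   LISTENER_NODE1 = (ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))
--   LISTENER_NODE2 = (ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))
-- Then point each instance's remote_listener at the other node, so each
-- listener learns about the services of both instances:
ALTER SYSTEM SET remote_listener='LISTENER_NODE2' SCOPE=BOTH SID='db11';
ALTER SYSTEM SET remote_listener='LISTENER_NODE1' SCOPE=BOTH SID='db12';
```

Once cross-registration is in place, 'lsnrctl status' on either node should show both instances for each service, as node 2 already does.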
Dear all,
My version is 11.2.0.2.5. One of my RAC instances crashed with the messages "ORA-00240: control file enqueue held for more than 120 seconds" and "Received an instance abort message from instance 1".
Here are the contents of the alert log file:
IPC Send timeout detected. Receiver ospid 27423 [[email protected] (LMON)]
2013-03-22 22:30:05.644000 -07:00
Errors in file /u01/app/oracle/diag/rdbms/lfgoimdb/LFGoimdb2/trace/LFGoimdb2_lmon_27423.trc:
2013-03-22 22:31:08.734000 -07:00
Errors in file /u01/app/oracle/diag/rdbms/lfgoimdb/LFGoimdb2/trace/LFGoimdb2_arc2_27691.trc (incident=15905):
ORA-00240: control file enqueue held for more than 120 seconds
Incident details in: /u01/app/oracle/diag/rdbms/lfgoimdb/LFGoimdb2/incident/incdir_15905/LFGoimdb2_arc2_27691_i15905.trc
2013-03-22 22:31:13.409000 -07:00
Received an instance abort message from instance 1
Please check instance 1 alert and LMON trace files for detail.
LMS0 (ospid: 27427): terminating the instance due to error 481
System state dump requested by (instance=2, osid=27427 (LMS0)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/lfgoimdb/LFGoimdb2/trace/LFGoimdb2_diag_27413.trc
2013-03-22 22:31:18.376000 -07:00
Dumping diagnostic data in directory=[cdmp_20130322223113], requested by (instance=2, osid=27427 (LMS0)), summary=[abnormal instance termination].
ORA-1092 : opitsk aborting process
Instance terminated by LMS0, pid = 27427
Thanks for the reply.
My redo log size is the default 50 MB. There is currently no load on the system, since we are not using this environment for the time being. The log switches average 8 per day. I think increasing the redo size will cause further problems, since the archiver may again hold the lock for a longer time.
Since there is no dedicated connection between the nodes and storage, is upgrading the hardware and network configuration the only solution to this? Or am I still missing something?
As far as configuration is concerned, I cannot add more resources to this environment. How can I solve this issue? -
Very Urgent: Thread 1 cannot allocate new log, sequence 6 : Script stuck
I am running a script; it is stuck, with the log file showing:
Thread 1 cannot allocate new log, sequence 6
All online logs needed archiving
Current log# 7 seq# 5 mem# 0: /u13/sjmarte/oradata/redo7a.log
I have checked the database: it is in archivelog mode.
Checked the archive dest: we have space in TBs.
And almost 24 GB is occupied by log files, each of 6 GB.
====================================
Can anyone please help us get rid of this issue?
Sorry, I'm new to this forum.
We don't have a separate FRA in this case.
This is the log file information:
Wed May 18 13:28:23 2011
Thread 1 advanced to log sequence 5 (LGWR switch)
Current log# 7 seq# 5 mem# 0: /u13/sjmarte/oradata/redo7a.log
Current log# 7 seq# 5 mem# 1: /u14/sjmarte/oradata/redo7b.log
Wed May 18 13:31:58 2011
ORACLE Instance sjmarte - Can not allocate log, archival required
Wed May 18 13:31:58 2011
Thread 1 cannot allocate new log, sequence 6
All online logs needed archiving
Current log# 7 seq# 5 mem# 0: /u13/sjmarte/oradata/redo7a.log
Current log# 7 seq# 5 mem# 1: /u14/sjmarte/oradata/redo7b.log
Here is the archive log list:
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /backup/oracle/sjmarte/archive
Oldest online log sequence 2
Next log sequence to archive 2
Current log sequence 5
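When all online logs need archiving, the usual first step is to check whether the archiver is stuck at its destination and then clear the backlog. A rough sketch (the dest_id of 1 is an assumption; check it against your own log_archive_dest settings):

```sql
-- Check the archive destination for errors (dest_id 1 assumed)
select dest_id, status, error
  from v$archive_dest
 where dest_id = 1;

-- Once the destination is writable again, force archiving of the backlog
alter system archive log all;
```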
Edited by: 784786 on May 18, 2011 11:35 AM -
Thread 1 cannot allocate new log, sequence 1558 Checkpoint not complete
hi,
I'm working on an Oracle 10g RAC database on an AIX machine. I'm getting this error at peak time:
Thread 1 cannot allocate new log, sequence 1558 Checkpoint not complete
I have read lots of documents, and they suggest increasing the redo log file size or adding more redo log files.
Can you please explain why I am getting this error, and how adding a redo log file can help?
Thanks.
When your current redo log fills and Oracle switches to another log, a checkpoint occurs. The checkpoint starts writing dirty buffers from the buffer cache to the datafiles. A log file cannot be reused until the checkpoint process has written to disk all the dirty buffers protected by that redo log. If a log switch attempts to reuse the log file whose switch triggered the checkpoint before that checkpoint completes, you get this error.
Typically this error appears when redo log switches occur too frequently or when you have too few redo logs.
Say you have two redo log files, A and B. When A fills, Oracle switches from A to B and a checkpoint occurs; DBWn starts writing dirty buffers to disk. If B fills before that checkpoint completes, the next log switch tries to reuse A, but A cannot be reused until the previous checkpoint has finished writing all the dirty blocks protected by A.
Adding a redo log helps because the switch can go to the newly added log, say C, which gives log A more time for its checkpoint to complete.
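As a rough sketch of that suggestion (the thread number, group number, file path, and size below are hypothetical; match them to your own RAC layout):

```sql
-- Add another redo log group so a log switch has somewhere to go
-- while the checkpoint for the reused group completes
alter database add logfile thread 1 group 7
  ('/u01/oradata/orcl/redo07a.log') size 200m;
```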
Khurram -
Difference Between central instance And application instance
Hi everybody, can anyone tell me
what the difference is between a central instance and an application instance?
I am using 4.7 EE with an Oracle database.
Check these links:
http://oreilly.com/catalog/sapadm/chapter/ch01.html
Basically, these terms come up when you are working on live servers that all the users log into to do their daily work.
When we want to distribute the workload across servers, we need a central instance and application servers.
Normally, common users do not know which server they are logging into, but they can log in directly using the specific instance details of a server.
Please see this also
http://help.sap.com/saphelp_nw2004s/helpdata/en/c4/3a64e8505211d189550000e829fbbd/frameset.htm -
What is difference among F5 and F6 and F7 in ABAP coding?
what is difference among F5 and F6 and F7 in ABAP coding ? Can u give me any example regarding this thread ?
Some additional info which can be quite helpful:
A watchpoint is an indicator in a program that tells the ABAP runtime processor to interrupt the program at a particular point. Unlike breakpoints, however, watchpoints are not activated until the contents of a specified field change. Watchpoints, like dynamic breakpoints, are user-specific, and so do not affect other users running the same program. You can only define watchpoints in the Debugger.
Use
You set watchpoints in the Debugger to monitor the contents of specific fields. They inform you when the value of a field changes. When the value changes, the Debugger interrupts the program.
Features
· You can set up to five watchpoints in a program.
See also Setting Watchpoints.
· You can also specify the conditions under which a watchpoint is to become active.
· You can specify a logical link for up to five (conditional) watchpoints.
See also Specifying Logical Links.
· You can define watchpoints as either local or global. If you define a global watchpoint, it is active in all called programs. Local watchpoints are only active in the specified program.
· You can change and delete watchpoints.
See Changing Watchpoints
· You can use watchpoints to display changes to the references of strings, data and object references, and internal tables.
See Memory Monitoring with Watchpoints
Breakpoints
Apart from executing an ABAP program in the Debugger from the start, you can also invoke the Debugger by setting one or more breakpoints in the program. A breakpoint is a signal at a particular point in the program that tells the ABAP runtime processor to interrupt processing and start the Debugger. The Debugger is activated when the program reaches this point.
There is also a special kind of breakpoint called a watchpoint. When you use watchpoints, the Debugger is not activated until the contents of a particular field change. For more information, refer to the chapter Watchpoints.
Breakpoint Variants
The Debugger contains different breakpoint variants:
Static
A non-user-specific breakpoint is inserted in the source code as an ABAP statement using the keyword BREAK-POINT. A user-specific breakpoint is set in the ABAP Editor using the BREAK username statement.
Directly set
dynamic breakpoints
Can be set in the ABAP Editor or the Debugger by double-clicking a line, for example. Dynamic breakpoints are always user-specific, and are deleted when you log off from the R/3 System.
Breakpoints
at statements
The Debugger stops the program immediately before the specified statement is executed.
Breakpoints
at subroutines
The Debugger stops the program immediately before the specified subroutine is called.
Breakpoints at function modules
The Debugger stops the program immediately before the specified function module is called.
Breakpoints at methods
The Debugger stops the program immediately before the specified method is called.
Breakpoints at exceptions and system exceptions
The Debugger stops the program immediately after a system exception, that is, after a runtime error has been intercepted.
Static Breakpoints
Static breakpoints are user-independent unless a user name is specified. Once the statement BREAK-POINT or BREAK name has been inserted in an ABAP program, the system interrupts the program at that point for every user, or only for the named user, respectively. This procedure is only useful in the development phase of an application, when program execution is always to be interrupted at the same place. For more information, refer to the chapter Static Breakpoints.
In HTTP sessions, a static breakpoint is skipped if you did not set additional dynamic HTTP breakpoints in the editor of a BSP page. Instead, a corresponding system log entry is written, which can be checked using transaction SM21.
Dynamic Breakpoints
Dynamic breakpoints are user-specific. Therefore, you should use them if you only want the program to be interrupted when you run it yourself, not when it is being executed by other users. All dynamic breakpoints are deleted when you log off from the R/3 System.
Dynamic breakpoints are more flexible than static breakpoints because you can deactivate or delete them at runtime. They have the following advantages:
· You do not have to change the program code.
· You can set them even when the program is locked by another programmer.
· You can define a counter that only activates the breakpoint after it has been reached.
Special dynamic breakpoints are useful when you want to interrupt a program directly before a particular ABAP statement, a subroutine, or an event, but do not know exactly where to find it in the source code. Event here is used to refer to the occurrence of a particular statement, for example, or calling up a method. Special dynamic breakpoints are user-specific. You can only set them in the Debugger. For more information, refer to the chapter Dynamic Breakpoints.
In HTTP sessions, the system stops both at static and dynamic breakpoints if a dynamic breakpoint was set in the editor of a BSP page before program execution.
Lifetime and Transfer of Breakpoints
A static breakpoint remains intact as long as the BREAK-POINT or BREAK-POINT name statement is not removed from the source code. Without saving, dynamic breakpoints only remain intact in the relevant internal session. However, they remain in effect during the entire user session if they are saved by choosing the menu path Breakpoints ® Save in the ABAP Debugger. For more details on the subject of user sessions and modes, refer to Modularization Techniques in the ABAP keyword documentation.
If you call an HTTP session during a user session, only the HTTP breakpoints are loaded when the HTTP session is started. You activate HTTP debugging in the ABAP Editor by choosing Utilities ® Settings ® HTTP Debugging. Depending on the setting, the system then displays either the HTTP or standard breakpoints in the Editor.
If you call an update session during a user session, breakpoints that were defined beforehand in the calling processing unit are copied to the new update session, where they can be displayed under Breakpoints. If, in the ABAP Debugger, you check Update Debugging under Settings and then, for example, call the update module func using CALL FUNCTION func IN UPDATE TASK, a new window is opened in which you can debug this function module in the update session. All the breakpoints that were set in the calling processing unit can also be processed here.
we can keep them at :
Statements
Subroutines
Function Module Calls
at Methods
System Exceptions
Breakpoint:
We can start debugging from that point, or if we set a breakpoint at some place, we can go directly to that point using F6.
Watchpoint: for example, if we have to check the output of 4000 records based on a field value, i.e. we have to check a particular vendor number, then we create a watchpoint on field LIFNR with value '2000'; we can then go directly to the vendor whose number is 2000. -
hi everyone,
I am looking at our SB system and trying to do only a redo log backup. However, it comes back with the error message: "The current redo log sequence number is not greater than the sequence number of the last saved offline redo log file."
So we suspect that, since the last refresh, the sequence may have been reset to start from 1 again. Does anyone know how to solve this problem? Thanks in advance.
Amy
Hi,
Is your problem solved?
Can you please tell us whether you executed a database recovery earlier?
Please check if there was a case of RESETLOGS.
To do this, run BRTOOLS --> Instance Mgt --> Show Instance Status.
This will show whether there was a RESETLOGS.
If so, try the following:
Shut down sap and the database.
Look for a folder named oraarch
Oracle/sid/oraarch
Copy all the archive files to another folder and delete all the arc files from the oraarch folder.
The name of the arch files will be like <SID>ARCHARC123456.001
After copying them to another location and subsequently deleting them from the oraarch folder, start up Oracle in mount mode.
Then issue the following sql
alter database open resetlogs;
then start sap.
NB: Don't forget to take a complete backup before and after doing this.
Hope this will help.
Most probably your database contains two incarnations of redo log files, so this error is coming up.
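One way to check for multiple incarnations (i.e. past RESETLOGS operations) directly from SQL*Plus is the standard v$database_incarnation view:

```sql
-- Each RESETLOGS creates a new incarnation; more than one row here
-- (with differing RESETLOGS_CHANGE#) means a resetlogs has occurred
select incarnation#, resetlogs_change#, resetlogs_time, status
  from v$database_incarnation
 order by incarnation#;
```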
regards
Maybe you are looking for
-
How to get the user interface in english ?
Hello, I have installed SQL Developer 1.5.5. The user interface is in French (probably because my PC is in French). I would like to switch it to English; how do I do that? Thank you! Christian
-
Site skipping and jumping in other browsers
My site skips and jumps around when you scroll in IE and other browsers. The site looks great in Safari; is there any way to fix that?
-
Clone column groups with custom cells
I have set up custom bordered cells, intending to copy them along with the border. Instead, only the contents are copied; when I paste the columns, the borders are ignored. How do I copy and paste blocks of cells including the borders?
-
I have an iPhone 6 Plus. I used to have an iPhone 4. I have been using Apple products for a long time, but I still don't understand how to transfer a video to my laptop in the right orientation. Sometimes it turns horizontal, sometimes perpendicular. But when I
-
Experts, the way that order line item status is determined in SAP is causing confusion with our customers who are using our ECommerce for ERP module. The line item status does not take into account the line item's delivery statuses. For example, a