RAC node startup problem

Hi,
We have a 2-node RAC setup. It has been working fine, but after an improper machine shutdown the second node in the RAC is not starting up. The primary node is coming up fine.
In the alert log of the second node it gives the message
"lmon registered with NM - instance id 2 (internal mem no 1)"
and after the above message it does not progress.
In the LMON trace file, the following lines are written:
*** SESSION ID:(3.1) 2006-10-17 16:51:25.966
GES IPC: Receivers 3 Senders 3
GES IPC: Buffers Receive 1000 Send (i:1430 b:1430) Reserve 1000
GES IPC: Msg Size Regular 396 Batch 2048
Batch msg size = 2048
Batching factor: enqueue replay 48, ack 53
Batching factor: cache replay 34 size per lock 56
kjxggin: receive buffer size = 32768
kjxgmin: SKGXN ver (2 1 Oracle 9i Reference CM)
*** 2006-10-17 16:51:29.512
kjxgmrcfg: Reconfiguration started, reason 1
kjxgmcs: Setting state to 0 0.
*** 2006-10-17 16:51:29.527
Name Service frozen
kjxgmcs: Setting state to 0 1.
kjfcpiora: publish my weight 19490
*** 2006-10-17 16:56:38.362
kjxgfipccb: msg 0x0xbc063dc, mbo 0x0xbc063d8, type 24, ack 0, ref 0, stat 3
kjxgfipccb: Send timed out, stat 3 inst 0, type 24, tkt (0,0)
Submitting asynchronized dump request [2]
gsdctl is running fine.
oracm has also come up without any problems.
cluster_database=true is set in both instances.
Can anybody please let me know what the problem could be?
Thanks in advance.
Regards,
Aditya.

If you have access to Metalink, 276434.1 is the document ID you need to follow.
The syntax to change the VIP is:
srvctl modify nodeapps -n <NODENAME> -A <NEW_VIP_IP_OR_NAME>/<SUBNET_IP>/<INTERFACE_NAME>
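For example, with a hypothetical node name and new address (the usual sequence per the note is to stop the node applications first, then modify, then restart; adjust values to your environment):
srvctl stop nodeapps -n node2
srvctl modify nodeapps -n node2 -A 192.168.1.102/255.255.255.0/eth0
srvctl start nodeapps -n node2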

Similar Messages

  • RAC node restarting!

Hi,
One of our RAC environments keeps restarting.
I've disabled the init.cssd, init.crs, and init.evmd entries in /etc/inittab in order to check the logs.
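(For reference, the entries I commented out are the standard 10gR2 CRS lines in /etc/inittab; on a typical install they look roughly like this, though paths can vary per install:)
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null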
This is the situation:
    crsd.log:
    2009-02-04 00:09:00.118: [ COMMCRS][9]clsc_connect: (8000000100318640) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_node1_loud))
    2009-02-04 00:09:00.132: [ CSSCLNT][1]clsssInitNative: connect failed, rc 9
    2009-02-04 00:09:00.134: [  CRSRTI][1]32CSS is not ready. Received status 3 from CSS. Waiting for good status ..
    2009-02-04 00:09:08.016: [    CRSD][1]32Daemon Version: 10.2.0.2.0 Active Version: 10.2.0.2.0
    2009-02-04 00:09:08.016: [    CRSD][1]32Active Version and Software Version are same
    2009-02-04 00:09:08.017: [ CRSMAIN][1]32Initializing OCR
    2009-02-04 00:09:08.037: [  OCRRAW][1]proprioo: for disk 0 (/dev/rdsk/ora_ocr_raw), id match (1), my id set (752560621,1028247821) total id sets (1), 1st set
    (752560621,1028247821), 2nd set (0,0) my votes (2), total votes (2)
    2009-02-04 00:09:08.140: [ CSSCLNT][24]clssgsGroupJoin: CSS has not reached fatal mode.Registration is not yet safe. Retrying
    ocssd.log:
    [    CSSD]2009-02-03 21:52:08.651 [9] >USER: clssnmHandleUpdate: NODE 1 (node1l) IS ACTIVE MEMBER OF CLUSTER
    [    CSSD]2009-02-03 21:52:08.651 [9] >TRACE: clssnmHandleUpdate: diskTimeout set to (200000)ms
    [    CSSD]2009-02-03 21:52:08.651 [16] >TRACE: clssnmWaitForAcks: done, msg type(15)
    [    CSSD]2009-02-03 21:52:08.651 [16] >TRACE: clssnmDoSyncUpdate: Sync Complete!
    [    CSSD]2009-02-03 21:52:08.722 [1] >USER: NMEVENT_SUSPEND [00][00][00][00]
    [    CSSD]2009-02-03 21:52:08.724 [17] >TRACE: clssgmReconfigThread: started for reconfig (1)
    [    CSSD]2009-02-03 21:52:08.749 [17] >USER: NMEVENT_RECONFIG [00][00][00][02]
    [    CSSD]2009-02-03 21:52:08.749 [17] >TRACE: clssgmEstablishConnections: 1 nodes in cluster incarn 1
    [    CSSD]2009-02-03 21:52:08.751 [13] >TRACE: clssgmPeerListener: connects done (1/1)
    [    CSSD]2009-02-03 21:52:08.752 [17] >TRACE: clssgmEstablishMasterNode: MASTER for 1 is node(1) birth(1)
    [    CSSD]2009-02-03 21:52:08.752 [17] >TRACE: clssgmChangeMasterNode: requeued 0 RPCs
    [    CSSD]2009-02-03 21:52:08.752 [17] >TRACE: clssgmMasterCMSync: Synchronizing group/lock status
    [    CSSD]2009-02-03 21:52:08.752 [17] >TRACE: clssgmMasterSendDBDone: group/lock status synchronization complete
    [    CSSD]CLSS-3000: reconfiguration successful, incarnation 1 with 1 nodes
    [    CSSD]CLSS-3001: local node number 1, master node number 1
    [    CSSD]2009-02-03 21:52:08.753 [17] >TRACE: clssgmReconfigThread: completed for reconfig(1), with status(1)
    [    CSSD]2009-02-03 21:52:08.863 [10] >TRACE: clssgmClientConnectMsg: Connect from con(80000001008fd2a0) proc(8000000100ae26a8) pid() proto(10:2:1:1)
    [    CSSD]2009-02-03 21:52:08.864 [10] >TRACE: clssgmClientConnectMsg: Connect from con(8000000100ae0128) proc(8000000100ae2a10) pid() proto(10:2:1:1) from con(8000000100aa32c0) proc(8000000100aa5b90) pid() proto(10:2:1:1)
    alertlog:
    [cssd(2535)]CRS-1601:CSSD Reconfiguration complete. Active nodes are node1 .
    2009-02-03 23:55:20.821
    [cssd(2575)]CRS-1605:CSSD voting file is online: /dev/rdsk/ora_voting_raw. Detai ls in /work/crs/product/10.2/crs/log/lourmel/cssd/ocssd.log.
    2009-02-03 23:55:28.376
    evmd.log:
    Oracle Database 10g CRS Release 10.2.0.2.0 Production Copyright 1996, 2004, Oracle. All rights reserved
    2009-02-04 00:08:58.331: [    EVMD][1]32EVMD waiting for CSS to be ready err = 3
    2009-02-04 00:08:59.939: [ COMMCRS][9]clsc_connect: (800000010007d658) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_node1_loud))
    2009-02-04 00:08:59.946: [ CSSCLNT][1]clsssInitNative: connect failed, rc 9
    2009-02-04 00:08:59.948: [    EVMD][1]32EVMD waiting for CSS to be ready err = 3
    2009-02-04 00:09:07.596: [ CSSCLNT][1]clssgsGroupJoin: CSS has not reached fatal mode.Registration is not yet safe. Retrying
    syslog:
    Feb 4 00:08:41 lourmel syslog: Oracle Cluster Ready Services starting up automatically.
    Feb 4 00:08:45 lourmel sfd[2153]: starting the daemon.
    Feb 4 00:08:45 lourmel su: + tty?? root-orac
    Feb 4 00:08:45 lourmel krsd[2152]: Delay time is 300 seconds
    Feb 4 00:08:43 lourmel syslog: Oracle Cluster Ready Services starting up automatically.
    Feb 4 00:08:52 lourmel above message repeats 2 times
    Feb 4 00:08:52 lourmel syslog: Cluster Ready Services completed waiting on dependencies.
    Feb 4 00:08:53 lourmel syslog: Running CRSD with TZ =
When I ran the crs_stat command (before the restart) I got the message:
ORA-0184: Cannot communicate with CRS
    crsctl check crs gives us:
    Failure 1 contacting CSS daemon
    Cannot communicate with CRS
    Cannot communicate with EVM
As I said before, the machine keeps restarting.
Anyone have an idea? Please help.

Dear All,
I recently upgraded a few RAC setups with Oracle 10g Patchset 3 (10.2.0.4) on Linux servers.
In one of the RAC setups I found the servers rebooting daily. The same setup had been working fine, and the problem started only after applying the patchset. I checked all the logs and found nothing relevant.
Then I checked the things that were added with this patchset.
The most interesting find: Oracle added a new daemon, oprocd.
    # ps -efl | grep oprocd
    4 S root 6440 6063 0 -40 - - 2114 - Mar03 ? 00:00:00 /opt/oracle/product/10.2.0/crs/bin/oprocd.bin run -t 1000 -m 500 -hsi 5:10:50:75:90 -f
These are the interesting points about the line above:
1. The process is run by the root user.
2. It runs with the highest priority, -40.
3. It probes every second (-t 1000).
4. It waits for a CPU response for 500 milliseconds (-m 500 means the margin time is 500 milliseconds).
5. It runs in fatal mode (-f).
From these points I conclude: this daemon probes the CPU every second and waits 500 milliseconds for a response. If it gets no response within those 500 milliseconds, it assumes the CPU is hung and tries to reboot the machine. The operating system does not get enough time to write the system logs before the server reboots.
So the solution is to increase the margin time from 500 milliseconds to 10 seconds.
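(For context: as I understand it, the margin oprocd actually runs with is diagwait minus the CSS reboottime, which defaults to 3 seconds, so setting diagwait to 13 gives 13 - 3 = 10 seconds, i.e. the -m 10000 you will see in step 6 below.)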
The following steps increase the margin time.
Please remember: this modification requires downtime, and you need to stop the cluster services on all member nodes.
    1. Stop The CRS Process
    #crsctl stop crs
    #<CRS_HOME>/bin/oprocd stop
2. Ensure that the Clusterware stack is down and not running:
    #ps -ef |egrep "crsd.bin|ocssd.bin|evmd.bin|oprocd"
    This should return no processes.
    3. From one node of the cluster, change the value of the "diagwait" parameter to 13 by issuing the command as root:
    #crsctl set css diagwait 13 -force
    4. Check if diagwait is successfully set.
    #crsctl get css diagwait
    5. Restart the Oracle Clusterware on all the nodes by executing:
    #crsctl start crs
(Note: if you face any problems restarting the CRS services, ASM, or the database, you can reboot the nodes; the cluster and database will come up automatically via the init startup scripts.)
6. The oprocd daemon process will now show -m 10000:
    # ps -efl| grep oprocd
    # 4 S root 6440 6063 0 -40 - - 2114 - Feb02 ? 00:00:00 /opt/oracle/product/10.2.0/crs/bin/oprocd.bin run -t 1000 -m 10000 -hsi 5:10:50:75:90 -f
Rollback procedure:
If you need to unset the diagwait value for any reason:
#crsctl unset css diagwait
I am confident the abnormal RAC node restart problem will be solved with this workaround.
    Regards,
    Sumit
    Bangalore,India

  • RAC node restart

    Hello everyone,
I have hit an error: our RAC node auto-restarts with the messages below.
    #/u01/app/oracle/diag/rdbms/odsdb/odsdb1/trace/alert_odsdb1.log
    Fri Jun 07 12:23:42 2013
    Thread 1 cannot allocate new log, sequence 58363
    Checkpoint not complete
    Current log# 2 seq# 58362 mem# 0: +DATA/odsdb/onlinelog/group_2.265.812288839
    Current log# 2 seq# 58362 mem# 1: +DATA/odsdb/onlinelog/group_2.266.812288839
    Fri Jun 07 12:23:42 2013
    NOTE: ASMB terminating
    Errors in file /u01/app/oracle/diag/rdbms/odsdb/odsdb1/trace/odsdb1_asmb_32641.trc:
ORA-15064: communication failure with ASM instance
ORA-03113: end-of-file on communication channel
Process ID:
Session ID: 2047 Serial number: 5
    Errors in file /u01/app/oracle/diag/rdbms/odsdb/odsdb1/trace/odsdb1_asmb_32641.trc:
ORA-15064: communication failure with ASM instance
ORA-03113: end-of-file on communication channel
Process ID:
Session ID: 2047 Serial number: 5
    ASMB (ospid: 32641): terminating the instance due to error 15064
    Fri Jun 07 12:23:44 2013
    ORA-1092 : opitsk aborting process
    Fri Jun 07 12:23:46 2013
    ORA-1092 : opitsk aborting process
    Instance terminated by ASMB, pid = 32641
    Fri Jun 07 12:25:02 2013
    Starting ORACLE instance (normal)
    Fri Jun 07 12:25:23 2013
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Private Interface 'eth1:1' configured from GPnP for use as a private interconnect.
    [name='eth1:1', type=1, ip=169.254.37.103, mac=00-26-55-eb-61-89, net=169.254.0.0/16, mask=255.255.0.0, use=haip:cluster_interconnect/62]
    Public Interface 'eth0' configured from GPnP for use as a public interface.
    [name='eth0', type=1, ip=135.33.2.8, mac=00-26-55-eb-61-88, net=135.33.2.0/27, mask=255.255.255.224, use=public/1]
    Public Interface 'eth0:1' configured from GPnP for use as a public interface.
    [name='eth0:1', type=1, ip=135.33.2.13, mac=00-26-55-eb-61-88, net=135.33.2.0/27, mask=255.255.255.224, use=public/1]
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/11.2.0/dbhome_2/dbs/arch
    Autotune of undo retention is turned on.
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options.
    ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2
    System name:     Linux
    Node name:     odsdb1
    Release:     2.6.18-308.el5
    Version:     #1 SMP Fri Jan 27 17:17:51 EST 2012
    Machine:     x86_64
    Using parameter settings in server-side pfile /u01/app/oracle/product/11.2.0/dbhome_2/dbs/initodsdb1.ora
    System parameters with non-default values:
    processes = 4500
    sessions = 6784
    event = ""
    spfile = "+DATA/odsdb/spfileodsdb.ora"
    nls_language = "SIMPLIFIED CHINESE"
    nls_territory = "CHINA"
    memory_target = 170G
    control_files = "+DATA/odsdb/controlfile/current.262.812288837"
    control_files = "+DATA/odsdb/controlfile/current.261.812288837"
    db_block_size = 8192
    compatible = "11.2.0.0.0"
    db_files = 4096
    cluster_database = TRUE
    db_create_file_dest = "+DATA"
    db_recovery_file_dest = ""
    db_recovery_file_dest_size= 38820M
    thread = 1
    undo_tablespace = "UNDOTBS1"
    instance_number = 1
    remote_login_passwordfile= "EXCLUSIVE"
    db_domain = ""
    dispatchers = "(PROTOCOL=TCP) (SERVICE=odsdbXDB)"
    remote_listener = "odsdb-cluster-scan:1521"
    job_queue_processes = 1000
    audit_file_dest = "/u01/app/oracle/admin/odsdb/adump"
    audit_trail = "DB"
    db_name = "odsdb"
    open_cursors = 300
    diagnostic_dest = "/u01/app/oracle"
    Cluster communication is configured to use the following interface(s) for this instance
    169.254.37.103
    cluster interconnect IPC version:Oracle UDP/IP (generic)
    IPC Vendor 1 proto 2
    Fri Jun 07 12:25:33 2013
    PMON started with pid=2, OS id=22959
    Fri Jun 07 12:25:33 2013
    PSP0 started with pid=3, OS id=22962
    Fri Jun 07 12:25:34 2013
    VKTM started with pid=4, OS id=22971 at elevated priority
    VKTM running at (1)millisec precision with DBRM quantum (100)ms
    Fri Jun 07 12:25:34 2013
    GEN0 started with pid=5, OS id=22977
    Fri Jun 07 12:25:34 2013
    DIAG started with pid=6, OS id=22979
    Fri Jun 07 12:25:35 2013
    DBRM started with pid=7, OS id=22981
    Fri Jun 07 12:25:35 2013
    PING started with pid=8, OS id=22983
    Fri Jun 07 12:25:35 2013
    ACMS started with pid=9, OS id=22985
    Fri Jun 07 12:25:35 2013
    DIA0 started with pid=10, OS id=22987
    Fri Jun 07 12:25:35 2013
    LMON started with pid=11, OS id=22989
    Fri Jun 07 12:25:35 2013
    LMD0 started with pid=12, OS id=22991
    * Load Monitor used for high load check
    * New Low - High Load Threshold Range = [61440 - 81920]
    Fri Jun 07 12:25:35 2013
    LMS0 started with pid=13, OS id=22994 at elevated priority
    Fri Jun 07 12:25:35 2013
    LMS1 started with pid=14, OS id=22998 at elevated priority
    Fri Jun 07 12:25:35 2013
    LMS2 started with pid=15, OS id=23002 at elevated priority
    Fri Jun 07 12:25:35 2013
    LMS3 started with pid=16, OS id=23006 at elevated priority
    Fri Jun 07 12:25:35 2013
    RMS0 started with pid=17, OS id=23010
    Fri Jun 07 12:25:35 2013
    LMHB started with pid=18, OS id=23013
    Fri Jun 07 12:25:35 2013
    MMAN started with pid=19, OS id=23015
    Fri Jun 07 12:25:35 2013
    DBW0 started with pid=20, OS id=23017
    Fri Jun 07 12:25:35 2013
    DBW1 started with pid=21, OS id=23019
    Fri Jun 07 12:25:35 2013
    DBW2 started with pid=22, OS id=23022
    Fri Jun 07 12:25:35 2013
    DBW3 started with pid=23, OS id=23024
    Fri Jun 07 12:25:35 2013
    DBW4 started with pid=24, OS id=23026
    Fri Jun 07 12:25:35 2013
    DBW5 started with pid=25, OS id=23028
    Fri Jun 07 12:25:35 2013
    DBW6 started with pid=26, OS id=23031
    Fri Jun 07 12:25:35 2013
    DBW7 started with pid=27, OS id=23033
    Fri Jun 07 12:25:35 2013
    LGWR started with pid=28, OS id=23035
    Fri Jun 07 12:25:35 2013
    CKPT started with pid=29, OS id=23037
    Fri Jun 07 12:25:35 2013
    SMON started with pid=30, OS id=23039
    Fri Jun 07 12:25:35 2013
    RECO started with pid=31, OS id=23041
    Fri Jun 07 12:25:35 2013
    RBAL started with pid=32, OS id=23043
    Fri Jun 07 12:25:35 2013
    ASMB started with pid=33, OS id=23045
    Fri Jun 07 12:25:35 2013
    MMON started with pid=34, OS id=23048
    Fri Jun 07 12:25:35 2013
    MMNL started with pid=35, OS id=23052
    Fri Jun 07 12:25:35 2013
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    NOTE: initiating MARK startup
    starting up 1 shared server(s) ...
    Starting background process MARK
    Fri Jun 07 12:25:35 2013
    MARK started with pid=37, OS id=23056
    NOTE: MARK has subscribed
    lmon registered with NM - instance number 1 (internal mem no 0)
    Reconfiguration started (old inc 0, new inc 119)
    List of instances:
    1 2 (myinst: 1)
    Global Resource Directory frozen
    * allocate domain 0, invalid = TRUE
    Communication channels reestablished
    * domain 0 valid according to instance 2
    * domain 0 valid = 1 according to instance 2
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    LMS 3: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
    LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
    LMS 2: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
    LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Submitted all GCS remote-cache requests
    Fix write in gcs resources
    Reconfiguration started (old inc 119, new inc 121)
    List of instances:
    1 2 (myinst: 1)
    Nested reconfiguration detected.
    Global Resource Directory frozen
    Communication channels reestablished
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
    LMS 3: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
    LMS 2: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
    LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Fri Jun 07 12:25:45 2013
    Submitted all GCS remote-cache requests
    Fri Jun 07 12:26:08 2013
    Fix write in gcs resources
    Reconfiguration complete
    Fri Jun 07 12:26:10 2013
    LCK0 started with pid=40, OS id=23632
    Fri Jun 07 12:26:10 2013
    Starting background process RSMN
    Fri Jun 07 12:26:10 2013
    RSMN started with pid=41, OS id=23646
    ORACLE_BASE not set in environment. It is recommended
    that ORACLE_BASE be set in the environment
    Reusing ORACLE_BASE from an earlier startup = /u01/app/oracle
    Fri Jun 07 12:26:11 2013
    ALTER SYSTEM SET local_listener=' (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=135.33.2.13)(PORT=1521))))' SCOPE=MEMORY SID='odsdb1';
    ALTER DATABASE MOUNT /* db agent *//* {1:9971:2} */
    Fri Jun 07 12:26:11 2013
    NOTE: Loaded library: System
    Fri Jun 07 12:26:11 2013
    SUCCESS: diskgroup DATA was mounted
    Fri Jun 07 12:26:11 2013
    NOTE: dependency between database odsdb and diskgroup resource ora.DATA.dg is established
    Fri Jun 07 12:26:16 2013
    Successful mount of redo thread 1, with mount id 3452000551
    Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
    Lost write protection disabled
    Completed: ALTER DATABASE MOUNT /* db agent *//* {1:9971:2} */
    ALTER DATABASE OPEN /* db agent *//* {1:9971:2} */
    Picked broadcast on commit scheme to generate SCNs
    Thread 1 advanced to log sequence 58364 (thread open)
    Thread 1 opened at log sequence 58364
    Current log# 2 seq# 58364 mem# 0: +DATA/odsdb/onlinelog/group_2.265.812288839
    Current log# 2 seq# 58364 mem# 1: +DATA/odsdb/onlinelog/group_2.266.812288839
    Successful open of redo thread 1
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Fri Jun 07 12:26:21 2013
    SMON: enabling cache recovery
    Fri Jun 07 12:26:23 2013
    minact-scn: Inst 1 is a slave inc#:121 mmon proc-id:23048 status:0x2
    minact-scn status: grec-scn:0x0000.00000000 gmin-scn:0x0000.00000000 gcalc-scn:0x0000.00000000
    Fri Jun 07 12:26:34 2013
    [23651] Successfully onlined Undo Tablespace 2.
    Undo initialization finished serial:0 start:2061372614 end:2061384964 diff:12350 (123 seconds)
    Verifying file header compatibility for 11g tablespace encryption..
    Verifying 11g file header compatibility for tablespace encryption completed
    Fri Jun 07 12:26:34 2013
    SMON: enabling tx recovery
    Database Characterset is ZHS16GBK
    No Resource Manager plan active
    Starting background process GTX0
    Fri Jun 07 12:26:35 2013
    GTX0 started with pid=45, OS id=23931
    Starting background process RCBG
    Fri Jun 07 12:26:35 2013
    RCBG started with pid=46, OS id=23933
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    Fri Jun 07 12:26:35 2013
    QMNC started with pid=48, OS id=23940
    Completed: ALTER DATABASE OPEN /* db agent *//* {1:9971:2} */
    Fri Jun 07 12:26:38 2013
    Starting background process CJQ0
    Fri Jun 07 12:26:38 2013
    CJQ0 started with pid=55, OS id=23977
    Fri Jun 07 12:27:56 2013
    Thread 1 advanced to log sequence 58365 (LGWR switch)
    Current log# 1 seq# 58365 mem# 0: +DATA/odsdb/onlinelog/group_1.263.812288839
    Current log# 1 seq# 58365 mem# 1: +DATA/odsdb/onlinelog/group_1.264.812288839
    Fri Jun 07 12:28:18 2013
    Starting background process SMCO
    Fri Jun 07 12:28:18 2013
    SMCO started with pid=70, OS id=25166
    Fri Jun 07 12:29:01 2013
    Thread 1 cannot allocate new log, sequence 58366
    Trace file /u01/app/oracle/diag/rdbms/odsdb/odsdb1/trace/odsdb1_asmb_32641.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options
    ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2
    System name: Linux
    Node name: odsdb1
    Release: 2.6.18-308.el5
    Version: #1 SMP Fri Jan 27 17:17:51 EST 2012
    Machine: x86_64
    Instance name: odsdb1
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 33
    Unix process pid: 32641, image: oracle@odsdb1 (ASMB)
    *** 2013-05-14 15:37:08.705
    *** SESSION ID:(3499.1) 2013-05-14 15:37:08.705
    *** CLIENT ID:() 2013-05-14 15:37:08.705
    *** SERVICE NAME:() 2013-05-14 15:37:08.705
    *** MODULE NAME:() 2013-05-14 15:37:08.705
    *** ACTION NAME:() 2013-05-14 15:37:08.705
    NOTE: initiating MARK startup
    *** 2013-05-14 15:37:16.835
    instance health monitoring reports instance shutting down
    *** 2013-06-07 12:23:42.700
    NOTE: ASMB terminating
ORA-15064: communication failure with ASM instance
ORA-03113: end-of-file on communication channel
Process ID:
Session ID: 2047 Serial number: 5
    error 15064 detected in background process
ORA-15064: communication failure with ASM instance
ORA-03113: end-of-file on communication channel
Process ID:
Session ID: 2047 Serial number: 5
    kjzduptcctx: Notifying DIAG for crash event
    ----- Abridged Call Stack Trace -----
    ksedsts()+461<-kjzdssdmp()+267<-kjzduptcctx()+232<-kjzdicrshnfy()+53<-ksuitm()+1332<-ksbrdp()+3344<-opirip()+623<-opidrv()+603<-sou2o()+103<-opimai_real()+266<-ssthrdmain()+252<-main()+201<-__libc_start_main()+244<-_start()+36
    ----- End of Abridged Call Stack Trace -----
    *** 2013-06-07 12:23:42.783
    ASMB (ospid: 32641): terminating the instance due to error 15064
    /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
    NOTE: ASMB process exiting, either shutdown is in progress
    NOTE: or foreground connected to ASMB was killed.
    Fri Jun 07 12:23:42 2013
    NOTE: client exited [14808]
    Fri Jun 07 12:23:44 2013
    Received an instance abort message from instance 2
    Please check instance 2 alert and LMON trace files for detail.
    Fri Jun 07 12:23:44 2013
    Received an instance abort message from instance 2
    Please check instance 2 alert and LMON trace files for detail.
    LMD0 (ospid: 31201): terminating the instance due to error 481
    Instance terminated by LMD0, pid = 31201
    Fri Jun 07 12:24:30 2013
    * instance_number obtained from CSS = 1, checking for the existence of node 0...
    * node 0 does not exist. instance_number = 1
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Private Interface 'eth1:1' configured from GPnP for use as a private interconnect.
    [name='eth1:1', type=1, ip=169.254.37.103, mac=00-26-55-eb-61-89, net=169.254.0.0/16, mask=255.255.0.0, use=haip:cluster_interconnect/62]
    Public Interface 'eth0' configured from GPnP for use as a public interface.
    [name='eth0', type=1, ip=135.33.2.8, mac=00-26-55-eb-61-88, net=135.33.2.0/27, mask=255.255.255.224, use=public/1]
    Picked latch-free SCN scheme 3
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/11.2.0.2/grid/dbs/arch
    Autotune of undo retention is turned on.
    LICENSE_MAX_USERS = 0
    [grid@odsdb1 cssd]$ file core.30481
    core.30481: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV), SVR4-style, from 'ocssd.bin'
    [grid@odsdb1 cssd]$ gdb ocssd.bin core.30481
    GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-42.el5)
    Copyright (C) 2009 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law. Type "show copying"
    and "show warranty" for details.
    This GDB was configured as "x86_64-redhat-linux-gnu".
    For bug reporting instructions, please see:
    <http://www.gnu.org/software/gdb/bugs/>...
    Reading symbols from /u01/app/11.2.0.2/grid/bin/ocssd.bin...(no debugging symbols found)...done.
    [New Thread 30486]
    [New Thread 30530]
    [New Thread 30526]
    [New Thread 30525]
    [New Thread 30523]
    [New Thread 30522]
    [New Thread 30521]
    [New Thread 30520]
    [New Thread 30519]
    [New Thread 30504]
    [New Thread 30503]
    [New Thread 30495]
    [New Thread 30485]
    [New Thread 30484]
    [New Thread 30483]
    [New Thread 30481]
    Reading symbols from /u01/app/11.2.0.2/grid/lib/libhasgen11.so...(no debugging symbols found)...done.
    Loaded symbols for /u01/app/11.2.0.2/grid/lib/libhasgen11.so
    Reading symbols from /u01/app/11.2.0.2/grid/lib/libocr11.so...(no debugging symbols found)...done.
    Loaded symbols for /u01/app/11.2.0.2/grid/lib/libocr11.so
    Reading symbols from /u01/app/11.2.0.2/grid/lib/libocrb11.so...(no debugging symbols found)...done.
    Loaded symbols for /u01/app/11.2.0.2/grid/lib/libocrb11.so
    Reading symbols from /u01/app/11.2.0.2/grid/lib/libocrutl11.so...(no debugging symbols found)...done.
    Loaded symbols for /u01/app/11.2.0.2/grid/lib/libocrutl11.so
    Reading symbols from /u01/app/11.2.0.2/grid/lib/libclntsh.so.11.1...(no debugging symbols found)...done.
    Loaded symbols for /u01/app/11.2.0.2/grid/lib/libclntsh.so.11.1
    Reading symbols from /u01/app/11.2.0.2/grid/lib/libskgxn2.so...(no debugging symbols found)...done.
    Loaded symbols for /u01/app/11.2.0.2/grid/lib/libskgxn2.so
    Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done.
    Loaded symbols for /lib64/libdl.so.2
    Reading symbols from /lib64/libm.so.6...(no debugging symbols found)...done.
    Loaded symbols for /lib64/libm.so.6
    Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done.
    [Thread debugging using libthread_db enabled]
    Loaded symbols for /lib64/libpthread.so.0
    Reading symbols from /lib64/libnsl.so.1...(no debugging symbols found)...done.
    Loaded symbols for /lib64/libnsl.so.1
    Reading symbols from /u01/app/11.2.0.2/grid/lib/libasmclntsh11.so...(no debugging symbols found)...done.
    Loaded symbols for /u01/app/11.2.0.2/grid/lib/libasmclntsh11.so
    Reading symbols from /u01/app/11.2.0.2/grid/lib/libcell11.so...(no debugging symbols found)...done.
    Loaded symbols for /u01/app/11.2.0.2/grid/lib/libcell11.so
    Reading symbols from /u01/app/11.2.0.2/grid/lib/libskgxp11.so...(no debugging symbols found)...done.
    Loaded symbols for /u01/app/11.2.0.2/grid/lib/libskgxp11.so
    Reading symbols from /u01/app/11.2.0.2/grid/lib/libnnz11.so...(no debugging symbols found)...done.
    Loaded symbols for /u01/app/11.2.0.2/grid/lib/libnnz11.so
    Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
    Loaded symbols for /lib64/libc.so.6
    Reading symbols from /usr/lib64/libaio.so.1...(no debugging symbols found)...done.
    Loaded symbols for /usr/lib64/libaio.so.1
    Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
    Loaded symbols for /lib64/ld-linux-x86-64.so.2
    Reading symbols from /u01/app/11.2.0.2/grid/lib/libnque11.so...(no debugging symbols found)...done.
    Loaded symbols for /u01/app/11.2.0.2/grid/lib/libnque11.so
    Reading symbols from /opt/oracle/extapi/64/asm/orcl/1/libasm.so...(no debugging symbols found)...done.
    Loaded symbols for /opt/oracle/extapi/64/asm/orcl/1/libasm.so
    warning: no loadable sections found in added symbol-file system-supplied DSO at 0x7fff505fd000
    Core was generated by `/u01/app/11.2.0.2/grid/bin/ocssd.bin '.
    Program terminated with signal 6, Aborted.
    #0 0x000000369ea30265 in raise () from /lib64/libc.so.6
    (gdb) where
    #0 0x000000369ea30265 in raise () from /lib64/libc.so.6
    #1 0x000000369ea31d10 in abort () from /lib64/libc.so.6
    #2 0x00002afc67f9aeda in scls_abort (flags=0) at scls.c:7088
    #3 0x000000000040babd in clssscExit (thrd=0x10d325a0, status=clssscreasonSHUTNORM) at clsssc.c:2155
    #4 0x0000000000446221 in clssgmClientShutdown (thrd=0x10d325a0, cmInfo=0x10b40090) at clssgmc.c:6415
    #5 0x0000000000436707 in clssgmProcClientReqs (thrd=0x10d325a0, clctx=0x10b40630) at clssgmc.c:704
    #6 0x0000000000436405 in clssgmclientlsnr (thrd=0x10d325a0) at clssgmc.c:644
    #7 0x000000000040ac2f in clssscthrdmain (thrd=0x10d325a0) at clsssc.c:1716
    #8 0x000000369fa0677d in start_thread () from /lib64/libpthread.so.0
    #9 0x000000369ead49ad in clone () from /lib64/libc.so.6
    (gdb)
    2013-06-07 12:19:37.377: [    CSSD][1085888832]clssscSelect: cookie accept request 0x10b40630
    2013-06-07 12:19:37.377: [    CSSD][1085888832]clssgmAllocProc: (0x2aaab0133ea0) allocated
    2013-06-07 12:19:37.379: [    CSSD][1085888832]clssgmClientConnectMsg: properties of cmProc 0x2aaab0133ea0 - 1,2,3,4,5
    2013-06-07 12:19:37.379: [    CSSD][1085888832]clssgmClientConnectMsg: Connect from con(0x6ae44fa) proc(0x2aaab0133ea0) pid(14139/14139) version 11:2:1:4, properties: 1,2,3,4,5
    2013-06-07 12:19:37.379: [    CSSD][1085888832]clssgmClientConnectMsg: msg flags 0x0000
    2013-06-07 12:19:37.384: [    CSSD][1085888832]clssscSelect: cookie accept request 0x2aaab0133ea0
    2013-06-07 12:19:37.384: [    CSSD][1085888832]clssscevtypSHRCON: getting client with cmproc 0x2aaab0133ea0
    2013-06-07 12:19:37.384: [    CSSD][1085888832]clssgmRegisterClient: proc(69/0x2aaab0133ea0), client(1/0x2aaab010c5c0)
    2013-06-07 12:19:37.385: [    CSSD][1085888832]clssgmRegisterShared: grp DBODSDB, mbr 0, type 1
    2013-06-07 12:19:37.385: [    CSSD][1085888832]clssgmQueueShare: (0x2aaab0085790) target global grock DBODSDB member 0 type 1 queued from client (0x2aaab010c5c0), global grock DBODSDB, refcount 23
    2013-06-07 12:19:37.385: [    CSSD][1085888832]clssgmRegisterShared: global grock DBODSDB member 0 share type 1, refcount 23
    2013-06-07 12:19:37.391: [    CSSD][1085888832]clssscSelect: cookie accept request 0x2aaab0133ea0
    2013-06-07 12:19:37.391: [    CSSD][1085888832]clssscevtypSHRCON: getting client with cmproc 0x2aaab0133ea0
    2013-06-07 12:19:37.391: [    CSSD][1085888832]clssgmRegisterClient: proc(69/0x2aaab0133ea0), client(2/0x2aaab0061f10)
What is the problem?
Edited by: 徐振富 on 2013-6-7 at 6:38 PM
Edited by: 徐振富 on 2013-6-7 at 6:45 PM

Is your ASM instance up?
If not, try bringing the ASM instance up by itself and see if it throws any error.
Post the status of: crsctl status cluster -all
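A minimal check sequence, assuming 11.2 Grid Infrastructure and the node name odsdb1 seen in the logs (run as the Grid owner; adjust names to your environment):
srvctl status asm -n odsdb1
srvctl start asm -n odsdb1
crsctl check cluster -all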

  • What is the best use of a 1400 GB SGA (2 RAC nodes, 768 GB each)?

We are currently using 11.2.0.3.0 on a Unix Sun server with 2 RAC nodes, each with 8 UltraSPARC-T1 CPUs (came out in 2005), four threads each, so Oracle sees 32 CPUs; they are very slow (1.2 GHz). The database is 4 TB in size on a regular SAN (10k-rpm disks).
8 GB SGA.
Our new boss wants to update the system to the max to get the best performance possible. Money is a concern of course, but the budget is pretty high. Our use case is 12-16 users at the same time running reports, some small, others very large (returning a single row or tens of thousands of rows); reports take 5 seconds to 5 minutes. Our job is to get the fastest system possible. We have a total of 8 licenses available, so we can have 16 cores. We are also getting a 6 TB all-flash SSD array for the database. We can get any CPU we want, but we can't use parallel query servers due to all kinds of issues we have experienced (too many slaves, RAC interconnect saturation, etc.; whack-a-mole). SPARC has too many threads, and without parallel query Oracle runs each query in a single thread.
We have specced out the following system for each RAC node:
    HP ProLiant DL380p Gen8 8 SFF server
    2 Intel Xeon E5-2637v2 3.5GHz/4-core cpus
    768 gb ram
    2 HP 300GB 6G SAS 15K drives for database software
This will give us a total of 4 Xeon E5-2637v2 CPUs, 16 cores in total (0.5 core factor for 8 licenses), and 1,536 GB of RAM (leaving ~1,400 GB for the SGA). This will guarantee an available core for each user. We intend to create a very, very large keep pool, around 300 GB on each node, that will hold all our dimension tables. This, we hope, will reduce reads from the SSD to just data from the fact tables.
Are we doing massive overkill here? The budget for this was way less than what our boss expected. Will that big an SGA be wasted? Would, say, 256 GB be fine, or will Oracle take advantage of it and be able to keep most blocks in there?
Will an SGA that big cause Oracle problems due to the overhead of handling that much RAM?

    Current System:
    ===========
    a. Version : 11.2.0.3
    b. Unix Sun
    c. CPU - 8 cpus with 4 threads => 32 logical cpus or cores
    d. database 4TB
    e. SAN - 10k speed disk drives
    f. 8gb SGA
g. 1.2 GHz clock speed (?)
h. Users --> 12-16 concurrent, running reports of varying size
i. report elapsed time 5 sec to 5 mins
    j. cpu license -->8
    Target System
    ===========
    a. Version: 11.2.0.3
    b. HP ProLiant DL380p Gen8 8 SFF server
    c. RAM --> 768 GB
    d. 2 HP 300GB 6G SAS 15K drives for database software
    e. large keep pool -->90 gb to  hold all dimension tables. 
    f.  SSD to just data from fact tables
    g. SGA -->256gb
A reassessment of the performance issues of the current system appears to be required. A good performance tuning expert is needed to look into the tuning issues of the current application by analyzing AWR performance metrics. If an 8 GB SGA is not enough, the likely reason is that the queries running in the system do not have good access paths that select less data, so recent buffers from the different tables involved in a query keep getting flushed out. Until those issues are identified, the performance issues won't go away wherever you go; as table sizes increase in the future, the problem will reappear. If the queries mostly run with FULL scans, then re-platforming to Exadata might be the right decision, as Exadata has smart scan and cell offloading features which work faster; that might be the right direction for the best performance and the best investment for the future. Compression (COMPRESS FOR OLTP) could be another feature to exploit to improve efficiency further by reading fewer blocks in less read time.
Investment in infrastructure will solve a few issues in the short term, but the long-term issues will arise again.
Investment in identifying the performance issues of the current system would be the best investment in the current scenario.
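If you do pursue the keep-pool idea described above, a minimal sketch of the setup (hypothetical size and table name; run from SQL*Plus as SYSDBA on each instance):
$ sqlplus / as sysdba
SQL> -- carve a KEEP pool out of the buffer cache (90G is just the figure above, not a recommendation)
SQL> ALTER SYSTEM SET db_keep_cache_size = 90G SCOPE=BOTH SID='*';
SQL> -- assign a dimension table (hypothetical name) to the KEEP pool
SQL> ALTER TABLE dim_customer STORAGE (BUFFER_POOL KEEP);
SQL> -- verify which segments now target the KEEP pool
SQL> SELECT segment_name, buffer_pool FROM dba_segments WHERE buffer_pool = 'KEEP';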

  • RAC node Hung

    Hi Friends,
    Server info:
    Windows 2003 server
    Oracle 10.2.0.5, 2 Node RAC
We are having a problem with the Node 2 server hanging due to a blue screen dump error, but in Oracle we are not getting any errors in the CRS or alert logs. After restarting the server, the problem was solved. How can we identify what the reason for the server hang could be? We are not getting any errors on the operating system side either. Is there any way to identify the cause of the server hang after the server has been restarted?
    Thanks in advance.

    user12159566 wrote:
    Hi,
    Thanks for your reply.
The OS side also has no logs generated except "*Blue Screen Trap (BugCheck, STOP: 0x0000FFFF (0x0000000000000000, 0x0000000000000000, 0x0000000000000000, 0x0000000000000000))*". As per my knowledge, this is not a node eviction problem. We are not able to find any node eviction log in the Oracle logs.
    See this note:
    *RAC on Windows: Oracle Clusterware Node Evictions a.k.a. Why do we get a Blue Screen (BSOD) Caused By Orafencedrv.sys? [ID 337784.1]*
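To check whether the Clusterware eviction timeouts could be involved, you can also query the CSS settings (a sketch for 10.2; run crsctl from the Clusterware home as a privileged user):
crsctl get css misscount
crsctl get css disktimeout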

  • RAC node process using 25% physical memory

We have a QA server that is non-RAC, and production is a two-node RAC. We have a migration app that does an INSERT from SELECT across 2 instances. All of the machines have been in successful day-to-day use for several months; our only trouble spot seems to be the migration app.
    Today we started the app on the QA server and watched the oracle processes using top. They ran normally and finished without any problems.
The same app, started on either of the RAC nodes, produced process memory errors and died.
As the app ran, there was a process reading the data from the source instance and a process writing to the target instance. We confirmed this by querying the session data. It doesn't matter which of the nodes runs which target process; the result is the same.
The reading process (session) on the source instance seems to run normally. The write process on the target instance, however, begins slowly accumulating memory in about 16M chunks and holds on to them.
We saw this in the RES and %MEM columns of top. The target process never released any memory, but slowly grabbed it until its RES was 4GB and its %MEM was about 30%. The app then died with a process memory error. This is reproducible over several runs.
    ( Per Metalink Note 567506.1, the recommended value for Linux 64-bit is 4294967295 ..we have that set. )
There are other Oracle processes and instances running on both nodes which do not seem to be affected. The total number of processes on each machine is around 750, much lower than the nprocs ulimit of 63K.
These processes are both Oracle sessions spawned by the app.
    I haven't seen any info on the web or Metalink that matches these symptoms, so I thought I'd try the experts.
    Why would the write session continuously use up physical memory, but only on RAC nodes?
We are running RHEL5 on Dell PowerEdge 2950 with 16 GB physical memory. The 10g version is 10.2.0.4.

user12017889 wrote:
The write process on the target instance, however, begins slowly accumulating memory in about 16M chunks and holds on to them.
Exactly what process is this? An Oracle server process? Dedicated or shared server?
We saw this in the RES and in the MEM columns of top. The target process never released any memory, but slowly grabbed it until its RES was 4GB and the %MEM was about 30%.
How does the writer process work? Does it use PL/SQL? Does it use bulk processing? How does it call the reader process? Or does the reader process call it? Is this over a database link?
The app then died with process memory error. This is reproducible over several runs.
If this is an Oracle server process, then there should be an entry in the alert log of the instance that recorded the crash and includes the name of the trace file generated by the crash.
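To pin down which server process is accumulating the memory, one option is to rank sessions by allocated PGA in v$process (a sketch; run as a DBA on the target instance while the app runs):
SELECT s.sid, s.serial#, p.spid, s.program,
       ROUND(p.pga_alloc_mem/1024/1024) AS pga_alloc_mb
FROM v$session s, v$process p
WHERE p.addr = s.paddr
ORDER BY p.pga_alloc_mem DESC;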

  • RAC node connected to outside DB  and pass 2 IP address

    Experts,
We have a 4-node 11.1 RAC on Red Hat.
As we know, each node has 3 IPs: public, VIP, and private.
It works well inside the domain network.
But we hit a problem when trying to connect to a client's database on an outside network.
The connection passes 2 IPs to the client firewall (based on a network monitor).
The listener log shows that the connection is OK, but the connection is still blocked on the client's firewall side.
The client's network staff told us that we passed two IP addresses during the connection.
Could some expert explain why the RAC node's connection request passes two IPs to the client database?
It was only discovered by the network staff; we could not see the 2-IP information in the listener log file.
Is this our firewall's NAT setting issue, or the client firewall's NAT setting issue?
    Thanks
    Jim
    Edited by: user589812 on Jan 21, 2010 2:25 PM

Hi Experts,
The two IP addresses that were being passed were one of the load balancer and one of the DB server. The load balancer was supposed to mask its own IP address and only pass the DB IP address. Somehow we were sending both IPs to the client database on the outside network, yet it works well on the internal network side. How can we keep the load balancer IP address from reaching the client network firewall and the client database server side?
I am looking for help!
Jim
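One way to see exactly which addresses the client is handed during connect negotiation is client-side SQL*Net tracing (standard sqlnet.ora parameters; a sketch, assuming you can edit sqlnet.ora on a test client):
TRACE_LEVEL_CLIENT = 16
TRACE_DIRECTORY_CLIENT = /tmp
TRACE_FILE_CLIENT = client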

  • RAC node rebooting frequently

    Hi all,
I am working on a two-node RAC environment. One of my RAC nodes is rebooting very frequently. I am using an Oracle 10g database and Clusterware (10.2.0.1).
I have checked the OS logs (Linux AS 4) and the RAC-related logs but am not able to find anything. I am posting all the logs; please suggest.

Hi, I am posting the alert log, OS log, and ocssd logs.
Clusterware alert log:
    [crsd(5649)]CRS-1201:CRSD started on node ctmisdb1.
    2012-03-21 09:50:38.188
    [cssd(7490)]CRS-1601:CSSD Reconfiguration complete. Active nodes are ctmisdb1 .
    2012-03-21 09:50:46.726
    [crsd(5649)]CRS-1204:Recovering CRS resources for node ctmisdb2.
    2012-03-21 09:55:21.760
    [cssd(7490)]CRS-1601:CSSD Reconfiguration complete. Active nodes are ctmisdb1 ctmisdb2 .
    2012-03-21 12:07:46.681
    [cssd(7426)]CRS-1605:CSSD voting file is online: /dev/raw/raw2. Details in /u01/app/oracle/product/crs/log/ctmisdb1/cssd/ocssd.log.
    2012-03-21 12:07:50.432
    [cssd(7426)]CRS-1601:CSSD Reconfiguration complete. Active nodes are ctmisdb1 ctmisdb2 .
    2012-03-21 12:07:50.893
    [crsd(5549)]CRS-1012:The OCR service started on node ctmisdb1.
    2012-03-21 12:07:50.942
    [evmd(7304)]CRS-1401:EVMD started on node ctmisdb1.
    2012-03-21 12:07:52.827
    [crsd(5549)]CRS-1201:CRSD started on node ctmisdb1.
    2012-03-21 12:48:41.908
    [cssd(7448)]CRS-1605:CSSD voting file is online: /dev/raw/raw2. Details in /u01/app/oracle/product/crs/log/ctmisdb1/cssd/ocssd.log.
    2012-03-21 12:48:45.741
    [cssd(7448)]CRS-1601:CSSD Reconfiguration complete. Active nodes are ctmisdb1 ctmisdb2 .
    2012-03-21 12:48:49.173
    [crsd(5546)]CRS-1012:The OCR service started on node ctmisdb1.
    2012-03-21 12:48:49.190
    [evmd(7328)]CRS-1401:EVMD started on node ctmisdb1.
    2012-03-21 12:48:50.818
    [crsd(5546)]CRS-1201:CRSD started on node ctmisdb1.
    2012-03-21 13:26:36.398
    [cssd(7343)]CRS-1605:CSSD voting file is online: /dev/raw/raw2. Details in /u01/app/oracle/product/crs/log/ctmisdb1/cssd/ocssd.log.
    2012-03-21 13:26:40.492
    [cssd(7343)]CRS-1601:CSSD Reconfiguration complete. Active nodes are ctmisdb1 ctmisdb2 .
    2012-03-21 13:26:40.939
    [crsd(5542)]CRS-1012:The OCR service started on node ctmisdb1.
    2012-03-21 13:26:40.977
    [evmd(7223)]CRS-1401:EVMD started on node ctmisdb1.
    2012-03-21 13:26:42.772
    [crsd(5542)]CRS-1201:CRSD started on node ctmisdb1.
Node OS log:
    Mar 21 12:06:35 ctmisdb1 rc: Starting readahead: succeeded
    Mar 21 12:06:35 ctmisdb1 messagebus: messagebus startup succeeded
    Mar 21 12:06:36 ctmisdb1 cups-config-daemon: cups-config-daemon startup succeeded
    Mar 21 12:06:36 ctmisdb1 haldaemon: haldaemon startup succeeded
    Mar 21 12:06:37 ctmisdb1 fstab-sync[6267]: removed all generated mount points
    Mar 21 12:06:37 ctmisdb1 fstab-sync[6378]: added mount point /media/cdrecorder for /dev/hde
    Mar 21 12:06:37 ctmisdb1 su(pam_unix)[6323]: session opened for user oracle by (uid=0)
    Mar 21 12:06:37 ctmisdb1 su(pam_unix)[6324]: session opened for user oracle by (uid=0)
    Mar 21 12:06:37 ctmisdb1 su(pam_unix)[6229]: session opened for user oracle by (uid=0)
    Mar 21 12:06:37 ctmisdb1 su(pam_unix)[6229]: session closed for user oracle
    Mar 21 12:06:37 ctmisdb1 su(pam_unix)[6644]: session opened for user oracle by (uid=0)
    Mar 21 12:06:37 ctmisdb1 kernel: matroxfb: cannot set xres to 800, rounded up to 832
    Mar 21 12:06:37 ctmisdb1 last message repeated 2 times
    Mar 21 12:06:41 ctmisdb1 su(pam_unix)[6323]: session closed for user oracle
    Mar 21 12:06:41 ctmisdb1 su(pam_unix)[6644]: session closed for user oracle
    Mar 21 12:06:41 ctmisdb1 su(pam_unix)[6324]: session closed for user oracle
    Mar 21 12:06:41 ctmisdb1 logger: Cluster Ready Services completed waiting on dependencies.
    Mar 21 12:06:41 ctmisdb1 last message repeated 2 times
    Mar 21 12:06:45 ctmisdb1 gdm(pam_unix)[6379]: session opened for user root by (uid=0)
    Mar 21 12:06:46 ctmisdb1 gconfd (root-7052): starting (version 2.8.1), pid 7052 user 'root'
    Mar 21 12:06:47 ctmisdb1 gconfd (root-7052): Resolved address "xml:readonly:/etc/gconf/gconf.xml.mandatory" to a read-only configuration source at position 0
    Mar 21 12:06:47 ctmisdb1 gconfd (root-7052): Resolved address "xml:readwrite:/root/.gconf" to a writable configuration source at position 1
    Mar 21 12:06:47 ctmisdb1 gconfd (root-7052): Resolved address "xml:readonly:/etc/gconf/gconf.xml.defaults" to a read-only configuration source at position 2
    Mar 21 12:06:55 ctmisdb1 gconfd (root-7052): Resolved address "xml:readwrite:/root/.gconf" to a writable configuration source at position 0
    Mar 21 12:07:41 ctmisdb1 su(pam_unix)[5547]: session opened for user oracle by (uid=0)
    Mar 21 12:07:41 ctmisdb1 logger: Running CRSD with TZ =
    Mar 21 12:07:43 ctmisdb1 su(pam_unix)[7399]: session opened for user oracle by (uid=0)
    Mar 21 12:12:49 ctmisdb1 sshd(pam_unix)[15323]: session opened for user root by root(uid=0)
    Mar 21 12:12:57 ctmisdb1 su(pam_unix)[15531]: session opened for user oracle by root(uid=0)
    Mar 21 12:47:05 ctmisdb1 syslogd 1.4.1: restart.
ocssd log:
    [    CSSD]2012-03-21 11:24:41.045 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800661f0c0) proc(0x8006622560) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 11:24:41.078 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660cfe0) proc(0x800662ba70) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:07:44.564 >USER: Oracle Database 10g CSS Release 10.2.0.1.0 Production Copyright 1996, 2004 Oracle. All rights reserved.
    [  clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=ctmisdb1DBG_CSSD))
    [    CSSD]2012-03-21 12:07:44.564 >USER: CSS daemon log for node ctmisdb1, number 1, in cluster crs
    [    CSSD]2012-03-21 12:07:44.581 [28260544] >TRACE: clssscmain: local-only set to false
    [    CSSD]2012-03-21 12:07:44.603 [28260544] >TRACE: clssnmReadNodeInfo: added node 1 (ctmisdb1) to cluster
    [    CSSD]2012-03-21 12:07:44.621 [28260544] >TRACE: clssnmReadNodeInfo: added node 2 (ctmisdb2) to cluster
    [    CSSD]2012-03-21 12:07:44.627 [72925824] >TRACE: clssnm_skgxnmon: skgxn init failed, rc 1
    [    CSSD]2012-03-21 12:07:44.627 [28260544] >TRACE: clssnm_skgxnonline: Using vacuous skgxn monitor
    [    CSSD]2012-03-21 12:07:44.641 [28260544] >TRACE: clssnmInitNMInfo: misscount set to 60
    [    CSSD]2012-03-21 12:07:44.655 [28260544] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//dev/raw/raw2)
    [    CSSD]2012-03-21 12:07:46.661 [72925824] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/raw/raw2)
    [    CSSD]2012-03-21 12:07:46.690 [72925824] >TRACE: clssnmReadDskHeartbeat: node(2) is down. rcfg(18) wrtcnt(7920) LATS(0) Disk lastSeqNo(7920)
    [    CSSD]2012-03-21 12:07:46.752 [28260544] >TRACE: clssnmFatalInit: fatal mode enabled
    [    CSSD]2012-03-21 12:07:46.752 [94777984] >TRACE: clssnmconnect: connecting to node 1, flags 0x0001, connector 1
    [    CSSD]2012-03-21 12:07:46.753 [94777984] >TRACE: clssnmconnect: connecting to node 0, flags 0x0000, connector 1
    [    CSSD]2012-03-21 12:07:46.753 [94777984] >TRACE: clssnmClusterListener: Probing node(2)
    [    CSSD]2012-03-21 12:07:46.755 [94777984] >TRACE: clssnmConnComplete: connected to node 2 (con 0x8006601040), state 3 birth 0, unique 1332303918/1332303918 prevConuni(0)
    [    CSSD]2012-03-21 12:07:46.756 [106332800] >TRACE: clssgmclientlsnr: listening on (ADDRESS=(PROTOCOL=ipc)(KEY=Oracle_CSS_LclLstnr_crs_1))
    [    CSSD]2012-03-21 12:07:46.756 [106332800] >TRACE: clssgmclientlsnr: listening on (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_ctmisdb1_crs))
    [    CSSD]2012-03-21 12:07:46.757 [151810688] >TRACE: clssnmPollingThread: Connection complete
    [    CSSD]2012-03-21 12:07:46.757 [162296448] >TRACE: clssnmSendingThread: Connection complete
    [    CSSD]2012-03-21 12:07:46.757 [172782208] >TRACE: clssnmRcfgMgrThread: Connection complete
    [    CSSD]2012-03-21 12:07:46.757 [172782208] >TRACE: clssnmRcfgMgrThread: Local Join
    [    CSSD]2012-03-21 12:07:46.757 [172782208] >WARNING: clssnmLocalJoinEvent: takeover aborted due to connected but inactive nodes
    [    CSSD]2012-03-21 12:07:47.339 [94777984] >TRACE: clssnmHandleSync: Acknowledging sync: src[2] srcName[ctmisdb2] seq[5] sync[18]
    [    CSSD]2012-03-21 12:07:47.759 [172782208] >TRACE: clssnmRcfgMgrThread: lastleader(2) unique(1332311864)
    [    CSSD]2012-03-21 12:07:48.341 [94777984] >TRACE: clssnmSendVoteInfo: node(2) syncSeqNo(18)
    [    CSSD]2012-03-21 12:07:50.346 [94777984] >TRACE: clssnmUpdateNodeState: node 0, state (0/0) unique (0/0) prevConuni(0) birth (0/0) (old/new)
    [    CSSD]2012-03-21 12:07:50.346 [94777984] >TRACE: clssnmDeactivateNode: node 0 () left cluster
    [    CSSD]2012-03-21 12:07:50.346 [94777984] >TRACE: clssnmUpdateNodeState: node 1, state (1/2) unique (1332311864/1332311864) prevConuni(0) birth (0/18) (old/new)
    [    CSSD]2012-03-21 12:07:50.346 [94777984] >TRACE: clssnmUpdateNodeState: node 2, state (4/3) unique (1332303918/1332303918) prevConuni(0) birth (0/16) (old/new)
    [    CSSD]2012-03-21 12:07:50.346 [94777984] >USER: clssnmHandleUpdate: SYNC(18) from node(2) completed
    [    CSSD]2012-03-21 12:07:50.346 [94777984] >USER: clssnmHandleUpdate: NODE 1 (ctmisdb1) IS ACTIVE MEMBER OF CLUSTER
    [    CSSD]2012-03-21 12:07:50.346 [94777984] >USER: clssnmHandleUpdate: NODE 2 (ctmisdb2) IS ACTIVE MEMBER OF CLUSTER
    [    CSSD]2012-03-21 12:07:50.429 [28260544] >USER: NMEVENT_SUSPEND [00][00][00][00]
    [    CSSD]2012-03-21 12:07:50.429 [183267968] >TRACE: clssgmReconfigThread: started for reconfig (18)
    [    CSSD]2012-03-21 12:07:50.429 [183267968] >USER: NMEVENT_RECONFIG [00][00][00][06]
    [    CSSD]2012-03-21 12:07:50.429 [183267968] >TRACE: clssgmEstablishConnections: 2 nodes in cluster incarn 18
    [    CSSD]2012-03-21 12:07:50.430 [140255872] >TRACE: clssgmInitialRecv: (0x102a0360) accepted a new connection from node 2 born at 16 active (2, 2), vers (10,3,1,2)
    [    CSSD]2012-03-21 12:07:50.430 [140255872] >TRACE: clssgmInitialRecv: conns done (2/2)
    [    CSSD]2012-03-21 12:07:50.430 [183267968] >TRACE: clssgmEstablishMasterNode: MASTER for 18 is node(2) birth(16)
    [    CSSD]2012-03-21 12:07:50.430 [183267968] >TRACE: clssgmChangeMasterNode: requeued 0 RPCs
    [    CSSD]2012-03-21 12:07:50.432 [140255872] >TRACE: clssgmHandleDBDone(): src/dest (2/65535) size(72) incarn 18
    [    CSSD]CLSS-3000: reconfiguration successful, incarnation 18 with 2 nodes
    [    CSSD]CLSS-3001: local node number 1, master node number 2
    [    CSSD]2012-03-21 12:07:50.433 [183267968] >TRACE: clssgmReconfigThread: completed for reconfig(18), with status(1)
    [    CSSD]2012-03-21 12:07:50.550 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006603bb0) proc(0x8006608b00) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:07:50.551 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x80066066f0) proc(0x8006608d70) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:07:53.569 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660ec70) proc(0x8006611260) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:00.829 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006610990) proc(0x800660de00) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:04.698 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006613030) proc(0x8006612930) pid(8115) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:04.816 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006612950) proc(0x8006613c20) pid(8115) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:04.832 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006612950) proc(0x8006613c20) pid(8115) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:06.615 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006612950) proc(0x8006613c20) pid(8171) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:07.114 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006615960) proc(0x8006616350) pid(8175) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:11.373 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x80066192a0) proc(0x8006619470) pid(8302) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:11.669 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800661bf60) proc(0x800661ee20) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:17.135 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800661bf60) proc(0x800661ee70) pid(8458) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:17.268 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800661fc00) proc(0x80066220d0) pid(8460) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:17.305 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x80066223e0) proc(0x8006625250) pid(8462) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:17.353 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006625560) proc(0x8006628430) pid(8464) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:24.585 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006625560) proc(0x8006628430) pid(8645) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:27.957 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006628740) proc(0x800662b610) pid(8722) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:30.931 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800662cce0) proc(0x800662c860) pid(8801) proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:36.400 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800661c5f0) proc(0x800661eb50) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:37.863 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800662f1c0) proc(0x800661eee0) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:38.537 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800662f1c0) proc(0x800661d500) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:39.232 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800661bf60) proc(0x800661d500) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:43.085 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x8006611210) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:08:58.971 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x80066112c0) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:09:59.290 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x800660b190) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:10:59.589 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x800660b190) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:11:59.904 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x800660b190) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:13:00.203 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x800660b190) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:13:14.029 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x800660b190) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:14:00.501 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x8006611210) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:15:00.809 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x8006628670) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:16:01.117 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x8006628670) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:17:01.447 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x800662f0f0) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:01.762 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x8006628670) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:39.841 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x8006628670) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:42.123 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x8006628670) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:42.316 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x8006628670) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:42.843 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x8006628670) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:42.963 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x8006628670) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:43.098 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b260) proc(0x800662bd20) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:44.173 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x8006628670) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:44.368 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b260) proc(0x800660b310) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:45.351 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x8006628670) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:46.236 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x800662f0f0) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:47.031 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x800662f0f0) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:47.694 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x800662f0f0) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:47.819 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b260) proc(0x800660b310) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:48.103 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x800662f0f0) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:48.327 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b260) proc(0x800660b310) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:48.484 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x8006611210) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:48.758 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x800662f0f0) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:49.529 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x800662f0f0) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:50.509 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x800662f0f0) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:51.060 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x800660b830) proc(0x800662f0f0) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:18:51.558 [106332800] >TRACE: clssgmClientConnectMsg: Connect from con(0x8006611630) proc(0x800662f0f0) pid() proto(10:2:1:1)
    [    CSSD]2012-03-21 12:48:39.836 >USER: Oracle Database 10g CSS Release 10.2.0.1.0 Production Copyright 1996, 2004 Oracle. All rights reserved.
    [  clsdmt]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=ctmisdb1DBG_CSSD))
    [    CSSD]2012-03-21 12:48:39.836 >USER: CSS daemon log for node ctmisdb1, number 1, in cluster crs
    [    CSSD]2012-03-21 12:48:39.849 [28260544] >TRACE: clssscmain: local-only set to false
    [    CSSD]2012-03-21 12:48:39.865 [28260544] >TRACE: clssnmReadNodeInfo: added node 1 (ctmisdb1) to cluster
    [    CSSD]2012-03-21 12:48:39.872 [28260544] >TRACE: clssnmReadNodeInfo: added node 2 (ctmisdb2) to cluster
    [    CSSD]2012-03-21 12:48:39.879 [72925824] >TRACE: clssnm_skgxnmon: skgxn init failed, rc 1
    [    CSSD]2012-03-21 12:48:39.879 [28260544] >TRACE: clssnm_skgxnonline: Using vacuous skgxn monitor
    [    CSSD]2012-03-21 12:48:39.881 [28260544] >TRACE: clssnmInitNMInfo: misscount set to 60
    [    CSSD]2012-03-21 12:48:39.888 [28260544] >TRACE: clssnmDiskStateChange: state from 1 to 2 disk (0//dev/raw/raw2)
    [    CSSD]2012-03-21 12:48:41.892 [72925824] >TRACE: clssnmDiskStateChange: state from 2 to 4 disk (0//dev/raw/raw2)
    [    CSSD]2012-03-21 12:48:41.915 [72925824] >TRACE: clssnmReadDskHeartbeat: node(2) is down. rcfg(20) wrtcnt(10367) LATS(0) Disk lastSeqNo(10367)
    [    CSSD]2012-03-21 12:48:41.959 [28260544] >TRACE: clssnmFatalInit: fatal mode enabled
    [    CSSD]2012-03-21 12:48:41.959 [94777984] >TRACE: clssnmconnect: connecting to node 1, flags 0x0001, connector 1
    [    CSSD]2012-03-21 12:48:41.959 [94777984] >TRACE: clssnmconnect: connecting to node 0, flags 0x0000, connector 1
    [    CSSD]2012-03-21 12:48:41.959 [94777984] >TRACE: clssnmClusterListener: Probing node(2)
    [    CSSD]2012-03-21 12:48:41.961 [94777984] >TRACE: clssnmConnComplete: connected to node 2 (con 0x8006702790), state 3 birth 0, unique 1332303918/1332303918 prevConuni(0)
    [    CSSD]2012-03-21 12:48:41.962 [106332800] >TRACE: clssgmclientlsnr: listening on (ADDRESS=(PROTOCOL=ipc)(KEY=Oracle_CSS_LclLstnr_crs_1))
    [    CSSD]2012-03-21 12:48:41.962 [106332800] >TRACE: clssgmclientlsnr: listening on (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_ctmisdb1_crs))
    [    CSSD]2012-03-21 12:48:41.963 [152330880] >TRACE: clssnmPollingThread: Connection complete
    [    CSSD]2012-03-21 12:48:41.963 [162816640] >TRACE: clssnmSendingThread: Connection complete
    [    CSSD]2012-03-21 12:48:41.963 [173302400] >TRACE: clssnmRcfgMgrThread: Connection complete
    [    CSSD]2012-03-21 12:48:41.963 [173302400] >TRACE: clssnmRcfgMgrThread: Local Join
    [    CSSD]2012-03-21 12:48:41.963 [173302400] >WARNING: clssnmLocalJoinEvent: takeover aborted due to connected but inactive nodes
    [    CSSD]2012-03-21 12:48:42.631 [94777984] >TRACE: clssnmHandleSync: Acknowledging sync: src[2] srcName[ctmisdb2] seq[13] sync[20]
    [    CSSD]2012-03-21 12:48:42.965 [173302400] >TRACE: clssnmRcfgMgrThread: lastleader(2) unique(1332314319)
    [    CSSD]2012-03-21 12:48:43.636 [94777984] >TRACE: clssnmSendVoteInfo: node(2) syncSeqNo(20)
    [    CSSD]2012-03-21 12:48:45.640 [94777984] >TRACE: clssnmUpdateNodeState: node 0, state (0/0) unique (0/0) prevConuni(0) birth (0/0) (old/new)
    [    CSSD]2012-03-21 12:48:45.640 [94777984] >TRACE: clssnmDeactivateNode: node 0 () left cluster
    [    CSSD]2012-03-21 12:48:45.640 [94777984] >TRACE: clssnmUpdateNodeState: node 1, state (1/2) unique (1332314319/1332314319) prevConuni(0) birth (0/20) (old/new)
    [    CSSD]2012-03-21 12:48:45.640 [94777984] >TRACE: clssnmUpdateNodeState: node 2, state (4/3) unique (1332303918/1332303918) prevConuni(0) birth (0/16) (old/new)
    [    CSSD]2012-03-21 12:48:45.640 [94777984] >USER: clssnmHandleUpdate: SYNC(20) from node(2) completed
    [    CSSD]2012-03-21 12:48:45.640 [94777984] >USER: clssnmHandleUpdate: NODE 1 (ctmisdb1) IS ACTIVE MEMBER OF CLUSTER
    [    CSSD]2012-03-21 12:48:45.640 [94777984] >USER: clssnmHandleUpdate: NODE 2 (ctmisdb2) IS ACTIVE MEMBER OF CLUSTER
    [    CSSD]2012-03-21 12:48:45.737 [28260544] >USER: NMEVENT_SUSPEND [00][00][00][00]
    [    CSSD]2012-03-21 12:48:45.738 [183788160] >TRACE: clssgmReconfigThread: started for reconfig (20)
    [    CSSD]2012-03-21 12:48:45.738 [183788160] >USER: NMEVENT_RECONFIG [00][00][00][06]
    [    CSSD]2012-03-21 12:48:45.738 [183788160] >TRACE: clssgmEstablishConnections: 2 nodes in cluster incarn 20
    [    CSSD]2012-03-21 12:48:45.739 [140776064] >TRACE: clssgmInitialRecv: (0x102a0370) accepted a new connection from node 2 born at 16 active (2, 2), vers (10,3,1,2)
    [    CSSD]2012-03-21 12:48:45.739 [140776064] >TRACE: clssgmInitialRecv: conns done (2/2)
    [    CSSD]2012-03-21 12:48:45.739 [183788160] >TRACE: clssgmEstablishMasterNode: MASTER for 20 is node(2) birth(16)
    [    CSSD]2012-03-21 12:48:45.739 [183788160] >TRACE: clssgmChangeMasterNode: requeued 0 RPCs
    [    CSSD]2012-03-21 12:48:45.741 [140776064] >TRACE: clssgmHandleDBDone(): src/dest (2/65535) size(72) incarn 20
    [    CSSD]CLSS-3000: reconfiguration successful, incarnation 20 with 2 nodes
    Please check and help.
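    If it helps, a minimal triage sketch for a CSS restart loop like the one above, assuming a 10gR2 clusterware home (the voting-disk path is taken from the log; everything else is illustrative):
        # inspect the CSS daemon logs around each restart
        ls -lrt $ORA_CRS_HOME/log/`hostname`/cssd/
        # correlate with OS-level messages at the same timestamps
        grep -i reboot /var/log/messages
        # confirm the voting disk named in the log is visible and readable
        ls -l /dev/raw/raw2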

  • RAC nodes rebooting

    I'm a newbie trying to implement 11g RAC using Openfiler on Enterprise Linux 5.3.
    So far I have successfully configured Openfiler, created the volumes, configured the nodes, and configured OCFS2 and ASM.
    When I reboot the machines, I start the Openfiler server and external storage first; they start fine and all volumes (devices) come up fine. But when I boot the nodes one after the other, they keep rebooting continuously after a couple of minutes, one after the other. I am clueless about how to figure out what the problem is or why it is happening. Has anyone else experienced a similar situation? How can this be resolved?
    I would appreciate any advice or help.
    Thanks

    What is the time difference between your RAC nodes? Anything greater than 45 seconds can possibly cause reboots.
    Check your disk timeouts and hangcheck-timer settings, as in the sketch below.
    HTH
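    A quick way to check both, as a sketch (assumes 10g/11g clusterware on the nodes and passwordless ssh; "node2" is a placeholder hostname):
        # compare clocks across the nodes; more than a few seconds of skew is trouble
        date; ssh node2 date
        # inspect the CSS timeout settings
        crsctl get css misscount
        crsctl get css disktimeout
        # hangcheck-timer parameters on Linux
        grep hangcheck /etc/modprobe.conf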

  • Solaris RAC nodes re-booting

    I have a pre-production 2-node cluster running on Solaris 10, Oracle 10.2.0.3 with the Oracle CRS, and using a NetApp filer as the shared storage.
    I also have a separate Solaris server running Grid Control 10.2.0.3, with the repository as one of the databases on the RAC (don't know if this is relevant to my problem).
    Periodically both RAC nodes reboot, with no trace of why (the GC server is fine). There is nothing logged in the Solaris logs (messages file), CRS logs, Oracle logs or the NetApp logs.
    All that is shown is the relevant service starting up following the shutdown.
    Has anyone any experience of this, or any thoughts on which component may cause such an issue?
    Thanks in advance
    Bob

    What type of Sun hardware are you using?
    Below is the action plan Oracle Support sent me on my SR for this issue; I'm not sure if any of it was provided to you or would be of help.
    ACTION PLAN
    ============
    1. There is nothing in the files at all that sheds any light on the issue.
    Again, 3 separate sets of clusters all losing all nodes at the same time is a very strange occurrence. Please be sure to have the admin look for anything in common with all clusters.
    2. Advise placing OSWatcher on the systems: Note 301137.1 Ext/Pub OS Watcher User Guide.
    If we have another occurrence, we will want the OSWatcher logs from 1 hour before the issue through the issue.
    Also see if the unix admin has any OS stats from this occurrence.
    3. Advise setting ntpd to run with the -x option; I do see that you are having negative time changes at times.
    -x will give us a skew rather than an abrupt time change.
    4. Advise setting this when you can. Please do the following to set the diagwait parameter:
    crsctl set css diagwait N [-force]
    where N is the number of seconds to wait for a filesystem sync to complete (after this wait the node will reboot regardless of whether the sync has completed). This change must be made with the clusterware down, which will require the '-force', or with the stack up on just 1 node, after which the stack on that node must be restarted before the stack starts up on any of the other nodes.
    N should be set to 25 (25 seconds).
    5. Advise that you have PCW MLR#6 (Patch 5980915) on the systems as well, but I do not believe that this was an Oracle bug; the reason for placing the patch is the advanced diagnostics included in that patchset.
    6. The two issues Sun is working on:
    Sun is working to resolve a time skew issue and a Solaris 10 kernel SIGALRM issue (Sun#6292092), in addition to Sun#6595936.
    7. We do have a diagnostic oprocd that some sites have used, but only on their test systems. It stops reboots and dumps information, but I have been hesitant to place it on production boxes. If you continue to have issues we may consider downloading oprocd_skewfix_noreboot from Bug 6279879, but at this time I do not believe that is warranted.
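    For reference, a minimal sketch of the diagwait change from step 4, assuming a 10gR2 CRS home in root's PATH (the exact verify syntax can vary by version):
        crsctl stop crs                     # on every node; the stack must be down
        crsctl set css diagwait 25 -force   # run once, on one node
        crsctl get css diagwait             # verify the new value
        crsctl start crs                    # restart the stack node by node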

  • If MSSQ is used, the client gets a TPEOS error when an Oracle RAC node reboots

    Hi all,
    In my Tuxedo application, if we use Single Server, Single Queue mode, then when we reboot any Oracle RAC node our application is fine and clients get correct results. If we use MSSQ (Multi Server, Single Queue), the application is also fine while both RAC nodes are up; but if we reboot any Oracle RAC node, client programs continue to run and get correct results, yet they always get a TPEOS error. In this situation the server receives the client's request, but the client cannot get the server's reply; it only gets the TPEOS error.
    Our environment is:
    Oracle RAC 10g (10.2.0.4), two instances (rac1, rac2), and two DTP services, s1 and s2, with TAF set to BASIC for both
    Tuxedo 10gR3, two nodes, running in MP mode, using XA to access the Oracle RAC database; services are both transactional and non-transactional
    OS is Linux AS4 U5, 64-bit
    The service programs use OCI
    Has anyone encountered this problem?

    Hi, first of all, thank you.
    The ULOG file contains only failover information, no other error messages; the client side also shows no other errors.
    When not using MSSQ, the relevant section of the ubb file is:
    *SERVERS
    DEFAULT:
    CLOPT="-A "
    sinUpdate_server SRVGRP=GROUP11 SRVID=80 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinUpdate_server SRVGRP=GROUP12 SRVID=160 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinCount_server SRVGRP=GROUP11 SRVID=240 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinCount_server SRVGRP=GROUP12 SRVID=320 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinSelect_server SRVGRP=GROUP11 SRVID=360 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinSelect_server SRVGRP=GROUP12 SRVID=400 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinInsert_server SRVGRP=GROUP11 SRVID=520 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinInsert_server SRVGRP=GROUP12 SRVID=560 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinDelete_server SRVGRP=GROUP11 SRVID=600 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinDelete_server SRVGRP=GROUP12 SRVID=640 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinDdl_server SRVGRP=GROUP11 SRVID=700 MIN=5 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinDdl_server SRVGRP=GROUP12 SRVID=740 MIN=5 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    lockselect_server SRVGRP=GROUP11 SRVID=800 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    lockselect_server SRVGRP=GROUP12 SRVID=840 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    #mulup_server SRVGRP=GROUP11 SRVID=1 MIN=2 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    #mulup_server SRVGRP=GROUP12 SRVID=60 MIN=2 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinUpdate_server SRVGRP=GROUP13 SRVID=83 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinUpdate_server SRVGRP=GROUP14 SRVID=164 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinCount_server SRVGRP=GROUP13 SRVID=243 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinCount_server SRVGRP=GROUP14 SRVID=324 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinSelect_server SRVGRP=GROUP13 SRVID=363 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinSelect_server SRVGRP=GROUP14 SRVID=404 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinInsert_server SRVGRP=GROUP13 SRVID=523 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinInsert_server SRVGRP=GROUP14 SRVID=564 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinDelete_server SRVGRP=GROUP13 SRVID=603 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinDelete_server SRVGRP=GROUP14 SRVID=644 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinDdl_server SRVGRP=GROUP13 SRVID=703 MIN=5 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    sinDdl_server SRVGRP=GROUP14 SRVID=744 MIN=5 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    lockselect_server SRVGRP=GROUP13 SRVID=803 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    lockselect_server SRVGRP=GROUP14 SRVID=844 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    #mulup_server SRVGRP=GROUP13 SRVID=13 MIN=2 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    #mulup_server SRVGRP=GROUP14 SRVID=64 MIN=2 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y
    WSL SRVGRP=GROUP11 SRVID=1000
    CLOPT="-A -- -n//120.3.8.237:7200 -I 60 -T 60 -w WSH -m 50 -M 100 -x 6 -N 3600"
    WSL SRVGRP=GROUP12 SRVID=1001
    CLOPT="-A -- -n//120.3.8.238:7200 -I 60 -T 60 -w WSH -m 50 -M 100 -x 6 -N 3600"
    WSL SRVGRP=GROUP13 SRVID=1003
    CLOPT="-A -- -n//120.3.8.237:7203 -I 60 -T 60 -w WSH -m 50 -M 100 -x 6 -N 3600"
    WSL SRVGRP=GROUP14 SRVID=1004
    CLOPT="-A -- -n//120.3.8.238:7204 -I 60 -T 60 -w WSH -m 50 -M 100 -x 6 -N 3600"
    If we use MSSQ, the MSSQ section of the ubb file is:
    *SERVERS
    DEFAULT:
    CLOPT="-A -p 1,60:1,30"
    sinUpdate_server SRVGRP=GROUP11 SRVID=80 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinUpdate11 REPLYQ=Y
    sinUpdate_server SRVGRP=GROUP12 SRVID=160 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinUpdate12 REPLYQ=Y
    sinCount_server SRVGRP=GROUP11 SRVID=240 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinCount11 REPLYQ=Y
    sinCount_server SRVGRP=GROUP12 SRVID=320 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinCount12 REPLYQ=Y
    sinSelect_server SRVGRP=GROUP11 SRVID=360 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinSelec11 REPLYQ=Y
    sinSelect_server SRVGRP=GROUP12 SRVID=400 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinSelect12 REPLYQ=Y
    sinInsert_server SRVGRP=GROUP11 SRVID=520 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinInsert11 REPLYQ=Y
    sinInsert_server SRVGRP=GROUP12 SRVID=560 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinInsert12 REPLYQ=Y
    sinDelete_server SRVGRP=GROUP11 SRVID=600 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinDelete11 REPLYQ=Y
    sinDelete_server SRVGRP=GROUP12 SRVID=640 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinDelete12 REPLYQ=Y
    sinDdl_server SRVGRP=GROUP11 SRVID=700 MIN=5 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinDdl11 REPLYQ=Y
    sinDdl_server SRVGRP=GROUP12 SRVID=740 MIN=5 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinDdl12 REPLYQ=Y
    lockselect_server SRVGRP=GROUP11 SRVID=800 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=lockselect11 REPLYQ=Y
    lockselect_server SRVGRP=GROUP12 SRVID=840 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=lockselect12 REPLYQ=Y
    #mulup_server SRVGRP=GROUP11 SRVID=1 MIN=2 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=mulup11 REPLYQ=Y
    #mulup_server SRVGRP=GROUP12 SRVID=60 MIN=2 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=mulup12 REPLYQ=Y
    sinUpdate_server SRVGRP=GROUP13 SRVID=83 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinUpdate13 REPLYQ=Y
    sinUpdate_server SRVGRP=GROUP14 SRVID=164 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinUpdate14 REPLYQ=Y
    sinCount_server SRVGRP=GROUP13 SRVID=243 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinCount13 REPLYQ=Y
    sinCount_server SRVGRP=GROUP14 SRVID=324 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinCount14 REPLYQ=Y
    sinSelect_server SRVGRP=GROUP13 SRVID=363 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinSelec13 REPLYQ=Y
    sinSelect_server SRVGRP=GROUP14 SRVID=404 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinSelect14 REPLYQ=Y
    sinInsert_server SRVGRP=GROUP13 SRVID=523 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinInsert13 REPLYQ=Y
    sinInsert_server SRVGRP=GROUP14 SRVID=564 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinInsert14 REPLYQ=Y
    sinDelete_server SRVGRP=GROUP13 SRVID=603 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinDelete13 REPLYQ=Y
    sinDelete_server SRVGRP=GROUP14 SRVID=644 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinDelete14 REPLYQ=Y
    sinDdl_server SRVGRP=GROUP13 SRVID=703 MIN=5 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinDdl13 REPLYQ=Y
    sinDdl_server SRVGRP=GROUP14 SRVID=744 MIN=5 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=sinDdl14 REPLYQ=Y
    lockselect_server SRVGRP=GROUP13 SRVID=803 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=lockselect13 REPLYQ=Y
    lockselect_server SRVGRP=GROUP14 SRVID=844 MIN=10 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=lockselect14 REPLYQ=Y
    #mulup_server SRVGRP=GROUP13 SRVID=13 MIN=2 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=mulup13 REPLYQ=Y
    #mulup_server SRVGRP=GROUP14 SRVID=64 MIN=2 MAX=30 MAXGEN=10 GRACE=10 RESTART=Y RQADDR=mulup14 REPLYQ=Y
    WSL SRVGRP=GROUP11 SRVID=1000
    CLOPT="-A -- -n//120.3.8.237:7200 -I 60 -T 60 -w WSH -m 50 -M 100 -x 6 -N 3600"
    WSL SRVGRP=GROUP12 SRVID=1001
    CLOPT="-A -- -n//120.3.8.238:7200 -I 60 -T 60 -w WSH -m 50 -M 100 -x 6 -N 3600"
    WSL SRVGRP=GROUP13 SRVID=1003
    CLOPT="-A -- -n//120.3.8.237:7203 -I 60 -T 60 -w WSH -m 50 -M 100 -x 6 -N 3600"
    WSL SRVGRP=GROUP14 SRVID=1004
    CLOPT="-A -- -n//120.3.8.238:7204 -I 60 -T 60 -w WSH -m 50 -M 100 -x 6 -N 3600"
    Is there any error in the above ubb file, or is MSSQ not being used correctly?
    I look forward to your answer. Thanks.
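    One hedged avenue to check: TPEOS indicates an operating-system level error, and MSSQ reply queues ride on System V message queues, so kernel queue limits are a common suspect. A Linux sketch (generic commands, not taken from the post):
        cat /proc/sys/kernel/msgmnb   # max bytes allowed on one message queue
        cat /proc/sys/kernel/msgmax   # max size of a single message
        ipcs -q                       # watch queue depth/usage while reproducing the error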

  • Dbconsole failed to start on one RAC node

    Hi
    I have 2 RAC nodes (RHEL 4) on 10.2.0.1. On one node dbconsole is running, and on the other I get the following. Earlier, dbconsole used to run perfectly fine on both nodes. I would appreciate any suggestions to rectify this problem.
    Regards
    oracle@rac01<18>:/u01/app/oracle/product/10.2/db_1/rac01_RACDB1/sysman/log> emctl start dbconsole
    TZ set to Canada/Newfoundland
    Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0
    Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
    http://rac01:1158/em/console/aboutApplication
    Agent Version : 10.1.0.4.1
    OMS Version : Unknown
    Protocol Version : 10.1.0.2.0
    Agent Home : /u01/app/oracle/product/10.2/db_1/rac01_RACDB1
    Agent binaries : /u01/app/oracle/product/10.2/db_1
    Agent Process ID : 23329
    Parent Process ID : 21132
    Agent URL : http://rac01:3938/emd/main
    Started at : 2007-07-25 11:37:32
    Started by user : oracle
    Last Reload : 2007-07-25 11:37:32
    Last successful upload : (none)
    Last attempted upload : (none)
    Total Megabytes of XML files uploaded so far : 0.00
    Number of XML files pending upload : 371
    Size of XML files pending upload(MB) : 7.66
    Available disk space on upload filesystem : 44.78%
    Agent is already started. Will restart the agent
    Stopping agent ... stopped.
    Starting Oracle Enterprise Manager 10g Database Control ............................................................................................. failed.
    Logs are generated in directory /u01/app/oracle/product/10.2/db_1/rac01_RACDB1/sysman/log
    oracle@rac01<19>:/u01/app/oracle/product/10.2/db_1/rac01_RACDB1/sysman/log>
    ON OTHER NODE:
    oracle@rac02<2>:/u01/app/oracle> emctl start dbconsole
    TZ set to Canada/Newfoundland
    Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0
    Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
    http://rac01:1158/em/console/aboutApplication
    Starting Oracle Enterprise Manager 10g Database Control .................................... started.
    Logs are generated in directory /u01/app/oracle/product/10.2/db_1/rac02_RACDB2/sysman/log
    oracle@rac02<3>:/u01/app/oracle>

    Thanks for your time and reply.
    Well, here is what I got; I couldn't make anything out of it.
    Regards
    oracle@rac01<19>:/u01/app/oracle/product/10.2/db_1/rac01_RACDB1/sysman/log> ls -lart
    total 13500
    drwxr----- 7 oracle dba 4096 Jul 14 10:48 ..
    -rw-r----- 1 oracle dba 0 Jul 14 10:48 emdctl.log
    drwxrwx--- 2 oracle dba 4096 Jul 14 10:54 nmcRACDB11521
    -rw-r----- 1 oracle dba 4655792 Jul 24 23:01 emoms.trc
    -rw-r----- 1 oracle dba 4655792 Jul 24 23:01 emoms.log
    drwxr----- 3 oracle dba 4096 Jul 25 11:35 .
    -rw-r----- 1 oracle dba 4096 Jul 25 12:05 emdb.nohup.lr
    -rw-r----- 1 oracle dba 1074 Jul 25 12:05 emagent_perl.trc
    -rw-r----- 1 oracle dba 1731 Jul 25 12:06 emagent.log
    -rw-r----- 1 oracle dba 1080 Jul 25 12:07 emagentfetchlet.trc
    -rw-r----- 1 oracle dba 1080 Jul 25 12:07 emagentfetchlet.log
    -rw-r----- 1 oracle dba 81089 Jul 25 13:28 emdctl.trc
    -rw-r----- 1 oracle dba 3309143 Jul 25 13:28 emdb.nohup
    -rw-r----- 1 oracle dba 1044518 Jul 25 13:28 emagent.trc
    oracle@rac01<20>:/u01/app/oracle/product/10.2/db_1/rac01_RACDB1/sysman/log> cat emagent.log
    2007-07-14 10:50:44 Thread-3086936288 Starting Agent 10.1.0.4.1 from /u01/app/oracle/product/10.2/db_1 (00701)
    2007-07-14 10:51:16 Thread-3086936288 EMAgent started successfully (00702)
    2007-07-14 14:38:21 Thread-3086935744 Starting Agent 10.1.0.4.1 from /u01/app/oracle/product/10.2/db_1 (00701)
    2007-07-14 14:39:00 Thread-3086935744 EMAgent started successfully (00702)
    2007-07-24 07:05:06 Thread-3086935744 Starting Agent 10.1.0.4.1 from /u01/app/oracle/product/10.2/db_1 (00701)
    2007-07-24 07:07:11 Thread-3086935744 target {+ASM1_rac01, osm_instance} is broken: cannot compute dynamic properties in time. (00155)
    2007-07-24 07:07:14 Thread-3086935744 EMAgent started successfully (00702)
    2007-07-24 12:06:27 Thread-3086935744 EMAgent normal shutdown (00703)
    2007-07-24 12:08:26 Thread-3086935744 Starting Agent 10.1.0.4.1 from /u01/app/oracle/product/10.2/db_1 (00701)
    2007-07-24 12:08:51 Thread-3086935744 EMAgent started successfully (00702)
    2007-07-25 11:35:35 Thread-3086935744 EMAgent normal shutdown (00703)
    2007-07-25 11:37:32 Thread-3086935744 Starting Agent 10.1.0.4.1 from /u01/app/oracle/product/10.2/db_1 (00701)
    2007-07-25 11:39:29 Thread-3086935744 target {+ASM1_rac01, osm_instance} is broken: cannot compute dynamic properties in time. (00155)
    2007-07-25 11:39:30 Thread-3086935744 EMAgent started successfully (00702)
    2007-07-25 12:03:36 Thread-3086935744 EMAgent normal shutdown (00703)
    2007-07-25 12:05:15 Thread-3086935744 Starting Agent 10.1.0.4.1 from /u01/app/oracle/product/10.2/db_1 (00701)
    2007-07-25 12:06:23 Thread-3086935744 target {+ASM1_rac01, osm_instance} is broken: cannot compute dynamic properties in time. (00155)
    2007-07-25 12:06:24 Thread-3086935744 EMAgent started successfully (00702)
    oracle@rac01<21>:/u01/app/oracle/product/10.2/db_1/rac01_RACDB1/sysman/log> cat emagentfetchlet.log
    2007-07-14 11:01:44,208 [main] WARN track.OracleInventory collectInventory.439 - ECM: The inventory location file for the special Windows NT case does not exist or is unreadable.
    2007-07-14 14:40:29,096 [main] WARN track.OracleInventory collectInventory.439 - ECM: The inventory location file for the special Windows NT case does not exist or is unreadable.
    2007-07-24 07:10:44,123 [main] WARN track.OracleInventory collectInventory.439 - ECM: The inventory location file for the special Windows NT case does not exist or is unreadable.
    2007-07-24 12:12:48,187 [main] WARN track.OracleInventory collectInventory.439 - ECM: The inventory location file for the special Windows NT case does not exist or is unreadable.
    2007-07-25 11:41:25,628 [main] WARN track.OracleInventory collectInventory.439 - ECM: The inventory location file for the special Windows NT case does not exist or is unreadable.
    2007-07-25 12:07:30,335 [main] WARN track.OracleInventory collectInventory.439 - ECM: The inventory location file for the special Windows NT case does not exist or is unreadable.
    oracle@rac01<22>:/u01/app/oracle/product/10.2/db_1/rac01_RACDB1/sysman/log>
    oracle@rac01<22>:/u01/app/oracle/product/10.2/db_1/rac01_RACDB1/sysman/log> tail -40 emagentfetchlet.trc
    2007-07-14 11:01:44,208 [main] WARN track.OracleInventory collectInventory.439 - ECM: The inventory location file for the special Windows NT case does not exist or is unreadable.
    2007-07-14 14:40:29,096 [main] WARN track.OracleInventory collectInventory.439 - ECM: The inventory location file for the special Windows NT case does not exist or is unreadable.
    2007-07-24 07:10:44,123 [main] WARN track.OracleInventory collectInventory.439 - ECM: The inventory location file for the special Windows NT case does not exist or is unreadable.
    2007-07-24 12:12:48,187 [main] WARN track.OracleInventory collectInventory.439 - ECM: The inventory location file for the special Windows NT case does not exist or is unreadable.
    2007-07-25 11:41:25,628 [main] WARN track.OracleInventory collectInventory.439 - ECM: The inventory location file for the special Windows NT case does not exist or is unreadable.
    2007-07-25 12:07:30,335 [main] WARN track.OracleInventory collectInventory.439 - ECM: The inventory location file for the special Windows NT case does not exist or is unreadable.
    oracle@rac01<25>:/u01/app/oracle/product/10.2/db_1/rac01_RACDB1/sysman/log> tail -10 emdctl.trc
    2007-07-25 13:01:02 Thread-3086935744 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:04:41 Thread-3086935744 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:07:12 Thread-3086935744 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:10:50 Thread-3086935744 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:14:32 Thread-3086935744 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:18:09 Thread-3086935744 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:20:40 Thread-3086935744 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:24:27 Thread-3086935744 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:28:06 Thread-3086935744 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:31:43 Thread-3086935744 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    oracle@rac01<28>:/u01/app/oracle/product/10.2/db_1/rac01_RACDB1/sysman/log> tail -10 emagent.trc
    2007-07-25 13:31:44 Thread-43162528 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:31:44 Thread-43162528 ERROR pingManager: nmepm_pingReposURL: Cannot connect to http://rac01:1158/em/upload/: retStatus=-32
    2007-07-25 13:32:14 Thread-74791840 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:32:14 Thread-74791840 ERROR pingManager: nmepm_pingReposURL: Cannot connect to http://rac01:1158/em/upload/: retStatus=-32
    2007-07-25 13:32:14 Thread-74791840 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:32:14 Thread-74791840 ERROR pingManager: nmepm_pingReposURL: Cannot connect to http://rac01:1158/em/upload/: retStatus=-32
    2007-07-25 13:32:44 Thread-74791840 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:32:44 Thread-74791840 ERROR pingManager: nmepm_pingReposURL: Cannot connect to http://rac01:1158/em/upload/: retStatus=-32
    2007-07-25 13:32:44 Thread-74791840 WARN http: snmehl_connect: connect failed to (rac01:1158): Connection refused (error = 111)
    2007-07-25 13:32:44 Thread-74791840 ERROR pingManager: nmepm_pingReposURL: Cannot connect to http://rac01:1158/em/upload/: retStatus=-32
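    The trace suggests nothing is listening on rac01:1158. As a first check, a small sketch using only the paths already shown in this thread:
        netstat -an | grep 1158    # is anything listening on the console port?
        emctl status dbconsole     # current dbconsole state on rac01
        tail -50 /u01/app/oracle/product/10.2/db_1/rac01_RACDB1/sysman/log/emdb.nohup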

  • Listener on RAC node 2 not accepting user connections

    Hey guys, I am experiencing a disturbing scenario. I have a 2-node RAC cluster. The problem is that the second listener is not registering any connections. I have verified the listener services using lsnrctl status (the default name is LISTENER), and I have also verified the local and remote listener parameters; they are fine. But running the following query shows a count of 0 for inst_id = 2:
    SQL> select count(*) from gv$session where username='XYZ' and inst_id=2;
    COUNT(*)
    0
    Any help in this regard is appreciated.

    Yeah, I know the purpose of SCAN very well, but the web admins are sticking to the VIP. Here is the output of lsnrctl status:
    LSNRCTL for Solaris: Version 11.2.0.1.0 - Production on 11-APR-2013 16:41:55
    Copyright (c) 1991, 2009, Oracle. All rights reserved.
    Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
    STATUS of the LISTENER
    Alias LISTENER
    Version TNSLSNR for Solaris: Version 11.2.0.1.0 - Production
    Start Date 10-MAR-2013 13:00:14
    Uptime 17 days 2 hr. 42 min. 18 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File /u01/app/11.2.0/grid/network/admin/listener.ora
    Listener Log File /u01/app/oracle/diag/tnslsnr/fedb6/listener/alert/log.xml
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=5.2.21.1)(PORT=1521)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=5.2.21.7)(PORT=1521)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
    Instance "+ASM2", status READY, has 1 handler(s) for this service...
    Service "grid1.xyz" has 1 instance(s).
    Instance "grid12", status READY, has 1 handler(s) for this service...
    Service "grid1XDB.xyz" has 1 instance(s).
    Instance "grid12", status READY, has 1 handler(s) for this service...
    Service "xyz" has 1 instance(s).
    Instance "grid12", status READY, has 1 handler(s) for this service...
    The command completed successfully
    and here is the tnsnames.ora content:
    GRID1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = grid1-scan)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = grid1.xyz)
        )
      )
    PRIMARY1_SERV =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 5.2.21.1)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = grid1.xyz)
          (INSTANCE_NAME = grid11)
        )
      )
    PRIMARY2_SERV =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 5.2.21.6)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = grid1.xyz)
          (INSTANCE_NAME = grid12)
        )
      )
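    One hedged check before digging further: confirm from instance 2 that its registration parameters point at the node-2 address and force a re-registration (a SQL*Plus sketch; nothing here is taken from the post beyond the instance number):
        SQL> show parameter local_listener
        SQL> show parameter remote_listener
        SQL> alter system register;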

  • Nslookup scan ip not pingable in rac node

    All,
    I'm planning to install 11.2.0.1 RAC on my laptop. As a first step I have configured DNS in a separate VMware VM and configured the rac1 node as well. Both the DNS and rac1 public IP addresses are pingable from each other and from the host machine, but the rac-scan name resolves only on the DNS server and not on the rac1 server. Will it cause any problem that the DNS server runs on a 32-bit OS while the RAC nodes run on 64-bit? Please let me know if I have missed anything here. Thanks again.
    About posting on this forum: I have used [code] [/code] tags to format code previously, but this time it is not working, and there is no option to preview the code before posting. I'm also not clear about "use spaces to separate multiple tags". I read https://forums.oracle.com/thread/865295 on how to post code; it says to use \. If you can guide me on how to format code, I will use that in future.
    Host OS : Windows 8 64 bit
    Guest OS -1 : dns 32 bit Linux
      [root@dns32 ~]# uname -a
       Linux dns32.testenv.com 2.6.18-164.el5 #1 SMP Thu Sep 3 02:16:47 EDT 2009 i686 i686 i386 GNU/Linux
    Guest OS -2 : rac1 64 bit Linux
      [root@rac1 ~]# uname -a
       Linux rac1 2.6.18-194.el5 #1 SMP Mon Mar 29 22:10:29 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
    Guest OS-3 : rac2 - yet to configure 64 bit Linux
    @ dns server
    [root@dns32 ~]# nslookup rac-scan
    Server:         192.168.1.26
    Address:        192.168.1.26#53
    Name:   rac-scan.testenv.com
    Address: 192.168.1.57
    Name:   rac-scan.testenv.com
    Address: 192.168.1.58
    Name:   rac-scan.testenv.com
    Address: 192.168.1.59
    [root@dns32 ~]# cat /etc/resolv.conf
    search testenv.com
    nameserver 192.168.1.26
    [root@dns32 ~]# ifconfig -a
    eth0      Link encap:Ethernet  HWaddr 00:0C:29:EF:03:D3
              inet addr:192.168.1.26  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:feef:3d3/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:2802 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2691 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:210115 (205.1 KiB)  TX bytes:208344 (203.4 KiB)
              Interrupt:67 Base address:0x2024
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:2308 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2308 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:5494207 (5.2 MiB)  TX bytes:5494207 (5.2 MiB)
    sit0      Link encap:IPv6-in-IPv4
              NOARP  MTU:1480  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
    [root@dns32 ~]# ping 192.168.1.26
    PING 192.168.1.26 (192.168.1.26) 56(84) bytes of data.
    64 bytes from 192.168.1.26: icmp_seq=1 ttl=64 time=0.200 ms
    --- 192.168.1.26 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms
    [root@dns32 ~]# ping 192.168.1.27
    PING 192.168.1.27 (192.168.1.27) 56(84) bytes of data.
    64 bytes from 192.168.1.27: icmp_seq=1 ttl=64 time=0.330 ms
    --- 192.168.1.27 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms
    @rac1 node :
    [root@rac1 ~]# cat /etc/resolv.conf
    search testenv.com
    nameserver 192.168.1.26
    [root@rac1 ~]#  nslookup rac-scan
    ;; connection timed out; no servers could be reached
    [root@rac1 ~]# ifconfig -a
    eth0      Link encap:Ethernet  HWaddr 00:0C:29:75:A9:39
              inet addr:192.168.1.27  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:fe75:a939/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:500 errors:0 dropped:0 overruns:0 frame:0
              TX packets:357 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:52333 (51.1 KiB)  TX bytes:39556 (38.6 KiB)
    eth1      Link encap:Ethernet  HWaddr 00:0C:29:75:A9:43
              inet addr:192.168.2.37  Bcast:192.168.2.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:fe75:a943/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:160 errors:0 dropped:0 overruns:0 frame:0
              TX packets:50 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:20359 (19.8 KiB)  TX bytes:6518 (6.3 KiB)
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:1940 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1940 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:4783881 (4.5 MiB)  TX bytes:4783881 (4.5 MiB)
    sit0      Link encap:IPv6-in-IPv4
              NOARP  MTU:1480  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
    [root@rac1 ~]# ping 192.168.1.26
    PING 192.168.1.26 (192.168.1.26) 56(84) bytes of data.
    64 bytes from 192.168.1.26: icmp_seq=1 ttl=64 time=0.284 ms
    64 bytes from 192.168.1.26: icmp_seq=2 ttl=64 time=0.456 ms
    --- 192.168.1.26 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1000ms
    rtt min/avg/max/mdev = 0.284/0.370/0.456/0.086 ms
    [root@rac1 ~]# ping 192.168.1.27
    PING 192.168.1.27 (192.168.1.27) 56(84) bytes of data.
    64 bytes from 192.168.1.27: icmp_seq=1 ttl=64 time=0.032 ms
    --- 192.168.1.27 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
    Thanks
    Arul
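    A hedged next step: "connection timed out; no servers could be reached" from rac1 usually means port 53 on the DNS VM is blocked or named is not listening on that address. From rac1 (IPs are the ones shown above):
        dig @192.168.1.26 rac-scan.testenv.com   # query the DNS VM directly, bypassing resolv.conf
    and on the DNS VM:
        netstat -lnup | grep :53                 # is named bound to 192.168.1.26?
        service iptables status                  # if running, allow 53/udp and 53/tcp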

    Thanks Saurabh. I have configured DNS using this blog: http://dnccfg.blogspot.in/2012/08/dns-configuration-on-linux.html.html
    [root@dns32 etc]# cat named.conf
    // named.caching-nameserver.conf
    // Provided by Red Hat caching-nameserver package to configure the
    // ISC BIND named(8) DNS server as a caching only nameserver
    // (as a localhost DNS resolver only).
    // See /usr/share/doc/bind*/sample/ for example named configuration files.
    // DO NOT EDIT THIS FILE - use system-config-bind or an editor
    // to create named.conf - edits to this file will be lost on
    // caching-nameserver package upgrade.
    options {
            listen-on port 53 { 192.168.1.26; };
            listen-on-v6 port 53 { ::1; };
            directory       "/var/named";
            dump-file       "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            // Those options should be used carefully because they disable port
            // randomization
            // query-source    port 53;
            // query-source-v6 port 53;
            allow-query     { any; };
            allow-query-cache { localhost; };
    };
    logging {
            channel default_debug {
                    file "data/named.run";
                    severity dynamic;
            };
    };
    view localhost_resolver {
            match-clients      { any; };
            match-destinations { 192.168.1.26; };
            recursion yes;
            include "/etc/named.rfc1912.zones";
    };
    [root@dns32 etc]# cat named.rfc1912.zones
    // named.rfc1912.zones:
    // Provided by Red Hat caching-nameserver package
    // ISC BIND named zone configuration for zones recommended by
    // RFC 1912 section 4.1 : localhost TLDs and address zones
    // See /usr/share/doc/bind*/sample/ for example named configuration files.
    zone "." IN {
            type hint;
            file "named.ca";
    zone "testenv.com" IN {
            type master;
            file "forward.zone";
            allow-update { none; };
    zone "localhost" IN {
            type master;
            file "localhost.zone";
            allow-update { none; };
    zone "1.168.192.in-addr.arpa" IN {
            type master;
            file "reverse.zone";
            allow-update { none; };
    [root@dns32 named]# cat forward.zone
    $TTL    86400
    @               IN SOA  dns32.testenv.com. root.dns32.testenv.com. (
                                            42              ; serial (d. adams)
                                            3H              ; refresh
                                            15M             ; retry
                                            1W              ; expiry
                                            1D )            ; minimum
                    IN NS           dns32.testenv.com.
    dns32     IN A 192.168.1.26
    rac1      IN A 192.168.1.27
    rac2      IN A 192.168.1.28
    rac1-priv  IN A 192.168.2.37
    rac2-priv  IN A 192.168.2.38
    rac1-vip  IN A 192.168.1.47
    rac2-vip  IN A 192.168.1.48
    rac-scan  IN A 192.168.1.57
    rac-scan  IN A 192.168.1.58
    rac-scan  IN A 192.168.1.59
    [root@dns32 named]# cat reverse.zone
    $TTL    86400
    @       IN      SOA     dns32.testenv.com. root.dns32.testenv.com.  (
                                          1997022700 ; Serial
                                          28800      ; Refresh
                                          14400      ; Retry
                                          3600000    ; Expire
                                          86400 )    ; Minimum
            IN      NS      dns32.testenv.com.
    26       IN      PTR     dns32.testenv.com.
    27        IN      PTR   rac1.testenv.com.
    28        IN      PTR   rac2.testenv.com.
    47        IN      PTR   rac1-vip.testenv.com.
    48        IN      PTR   rac2-vip.testenv.com.
    57        IN      PTR   rac-scan.testenv.com.
    58        IN      PTR   rac-scan.testenv.com.
    59        IN      PTR   rac-scan.testenv.com.
    Thanks
    Arul

  • Is it possible to move some of the capture processes to another rac node?

    Hi All,
    Is it possible to move some of the ODI (Oracle Data Integrator) capture processes running on node1 to node2? Once moved, do they work as usual? If it's possible, please provide me with the steps.
    Appreciate your response
    Best Regards
    SK.
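    For what it's worth, a minimal sketch of the usual mechanism, assuming the ODI changed-data capture here is Oracle Streams: a capture process runs on the RAC instance that owns its queue table, so it is normally relocated by changing the queue table's instance affinity. The queue table name below is hypothetical:
        BEGIN
          DBMS_AQADM.ALTER_QUEUE_TABLE(
            queue_table        => 'STRMADMIN.ODI_CAPTURE_QT',  -- hypothetical name
            primary_instance   => 2,   -- preferred instance for the queue owner
            secondary_instance => 1);  -- failover instance
        END;
        /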

    Hi Cezar,
    Thanks for your post. I have a related question regarding this:
    Is it really necessary to have multiple capture and multiple apply processes, one for each schema in ODI? When set to automatic configuration, ODI seems to create a capture process and a related apply process for each schema, which I guess leads to the specific performance problem (high CPU etc.) I mentioned in my other post: Re: Is it possible to move some of the capture processes to another rac node?
    Is there a way to use just one capture and one apply process for all of the schemas in ODI?
    Thanks a million.
