Two instances are down on a four-node physical standby.

Hi there,
I am new to Oracle.
I got the following errors:
ORA-01105: mount is incompatible with mounts by other instances
ORA-01677: standby file name convert parameters differ from other instance
1) I converted the spfile into a pfile for the 1st instance, which is down.
2) Then I changed the db_file_name_convert parameter by copying its value from the 2nd instance's pfile (as that instance is up), and then converted the pfile back into an spfile.
3) And then I bounced the 1st instance with
srvctl stop instance -d dbname -i instancename
srvctl start instance -d dbname -i instancename
but it didn't start, so I logged into SQL*Plus and executed STARTUP.
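In terms of actual commands, what I ran was roughly the following (a sketch; the /tmp path is illustrative, and dbname/instancename are placeholders as above):
SQL> CREATE PFILE='/tmp/initNDCPVDDS1.ora' FROM SPFILE;
-- edited db_file_name_convert in the pfile to match instance 2, then:
SQL> CREATE SPFILE FROM PFILE='/tmp/initNDCPVDDS1.ora';
$ srvctl stop instance -d dbname -i instancename
$ srvctl start instance -d dbname -i instancename
SQL> STARTUP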
And I got the following errors:
ORA-01105: mount is incompatible with mounts by other instances
ORA-01677: standby file name convert parameters differ from other instance
Please let me know what I am doing wrong.
Thanks in advance!!

**This is the pfile of the instance which is down:**
NDCPVDDS4.__db_cache_size=1157627904
NDCPVDDS1.__db_cache_size=1224736768
NDCPVDDS3.__db_cache_size=1258291200
NDCPVDDS2.__db_cache_size=1442840576
NDCPVDDS4.__java_pool_size=16777216
NDCPVDDS1.__java_pool_size=16777216
NDCPVDDS3.__java_pool_size=16777216
NDCPVDDS2.__java_pool_size=16777216
NDCPVDDS4.__large_pool_size=16777216
NDCPVDDS1.__large_pool_size=16777216
NDCPVDDS3.__large_pool_size=16777216
NDCPVDDS2.__large_pool_size=16777216
NDCPVDDS4.__shared_pool_size=805306368
NDCPVDDS1.__shared_pool_size=738197504
NDCPVDDS3.__shared_pool_size=704643072
NDCPVDDS2.__shared_pool_size=520093696
NDCPVDDS4.__streams_pool_size=0
NDCPVDDS1.__streams_pool_size=0
NDCPVDDS3.__streams_pool_size=0
NDCPVDDS2.__streams_pool_size=0
*.archive_lag_target=900
*.audit_file_dest='/opt/oracle/product/admin/NDCPVDDS/adump'
*.audit_trail='DB'
*.background_dump_dest='/opt/oracle/product/admin/NDCPVDDS/bdump'
*.cluster_database=true
*.cluster_database_instances=4
*.compatible='10.2.0.3.0'
*.control_files='+DG_NDCCLU35_ORA1/ndcpvdds/control01.ctl','+DG_NDCCLU35_ORA2/ndcpvdds/control02.ctl','+DG_NDCCLU35_ORA3/ndcpvdds/control03.ctl'#Restore Control file
*.core_dump_dest='/opt/oracle/product/admin/NDCPVDDS/cdump'
*.cursor_sharing='FORCE'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+DG_PDCCLU17_ORA1/PDCVDDSP','+DG_NDCCLU35_ORA1/NDCPVDDS','+DG_PDCCLU17_ORA2/PDCVDDSP','+DG_NDCCLU35_ORA2/NDCPVDDS','+DG_PDCCLU17_ORA3/PDCVDDSP','+DG_NDCCLU35_ORA3/NDCPVDDS','+DG_PDCCLU17_ORA4/PDCVDDSP','+DG_NDCCLU35_ORA1/NDCPVDDS','+DG_PDCCLU17_ORA5/PDCVDDSP','+DG_NDCCLU35_ORA2/NDCPVDDS','+DG_PDCCLU17_ORA6/PDCVDDSP','+DG_NDCCLU35_ORA3/NDCPVDDS','+DG_NDCCLU7R05_ORA1/NDCVDDSP','+DG_NDCCLU35_ORA1/NDCPVDDS','+DG_NDCCLU7R05_ORA2/NDCVDDSP','+DG_NDCCLU35_ORA2/NDCPVDDS','+DG_NDCCLU7R05_ORA3/NDCVDDSP','+DG_NDCCLU35_ORA3/NDCPVDDS'
*.db_name='PDCVDDSP'
*.db_unique_name='NDCPVDDS'
*.dg_broker_config_file1='+DG_NDCCLU35_ORA2/ndcpvdds/dr1ndcpvdds.dat'
*.dg_broker_config_file2='+DG_NDCCLU35_ORA2/ndcpvdds/dr2ndcpvdds.dat'
*.dg_broker_start=TRUE
*.dispatchers='(PROTOCOL=TCP) (SERVICE=NDCPVDDSXDB)'
*.fal_client='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=ndcgrid36vip)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=ndcpvdds_XPT)(INSTANCE_NAME=NDCPVDDS2)(SERVER=dedicated)))'
*.fal_server='pdcvddsp1'
NDCPVDDS1.instance_number=1
NDCPVDDS2.instance_number=2
NDCPVDDS3.instance_number=3
NDCPVDDS4.instance_number=4
*.job_queue_processes=10
NDCPVDDS1.local_listener='LISTENER_NDCPVDDS1'
NDCPVDDS2.local_listener='LISTENER_NDCPVDDS2'
NDCPVDDS3.local_listener='LISTENER_NDCPVDDS3'
NDCPVDDS4.local_listener='LISTENER_NDCPVDDS4'
*.log_archive_config='dg_config=(PDCVDDSP)'
*.log_archive_dest_1='location="+DG_NDCCLU35_ARCH/NDCPVDDS/arch", valid_for=(ONLINE_LOGFILES,ALL_ROLES)'
*.log_archive_dest_2='SERVICE=NDCVDDSP LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=NDCVDDSP'
*.log_archive_dest_3='service="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=pdcgrid17vip)(PORT=1800)))(CONNECT_DATA=(SERVICE_NAME=pdcvddsp_XPT)(INSTANCE_NAME=PDCVDDSP1)(SERVER=dedicated)))"',' LGWR ASYNC NOAFFIRM delay=0 OPTIONAL max_failure=0 max_connections=1 reopen=300 db_unique_name="pdcvddsp" register net_timeout=180 valid_for=(online_logfiles,primary_role)'
NDCPVDDS1.log_archive_dest_4='location="+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/"',' valid_for=(STANDBY_LOGFILE,STANDBY_ROLE)'
NDCPVDDS4.log_archive_dest_4='location="+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/"',' valid_for=(STANDBY_LOGFILE,STANDBY_ROLE)'
NDCPVDDS2.log_archive_dest_4='location="+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/"',' valid_for=(STANDBY_LOGFILE,STANDBY_ROLE)'
NDCPVDDS3.log_archive_dest_4='location="+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/"',' valid_for=(STANDBY_LOGFILE,STANDBY_ROLE)'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_dest_state_3='ENABLE'
NDCPVDDS1.log_archive_dest_state_4='ENABLE'
NDCPVDDS4.log_archive_dest_state_4='ENABLE'
NDCPVDDS2.log_archive_dest_state_4='ENABLE'
NDCPVDDS3.log_archive_dest_state_4='ENABLE'
*.log_archive_format='PDCVDDSP_%t_%s_%r.arc'
NDCPVDDS1.log_archive_format='PDCVDDSP_%t_%s_%r.arc'
NDCPVDDS4.log_archive_format='PDCVDDSP_%t_%s_%r.arc'
NDCPVDDS2.log_archive_format='PDCVDDSP_%t_%s_%r.arc'
NDCPVDDS3.log_archive_format='PDCVDDSP_%t_%s_%r.arc'
*.log_archive_max_processes=4
*.log_archive_min_succeed_dest=1
*.log_archive_trace=0
NDCPVDDS1.log_archive_trace=0
NDCPVDDS4.log_archive_trace=0
NDCPVDDS2.log_archive_trace=0
NDCPVDDS3.log_archive_trace=0
*.log_file_name_convert='+DG_PDCCLU17_ORA1/PDCVDDSP','+DG_NDCCLU35_ORA1/NDCPVDDS','+DG_PDCCLU17_ORA2/PDCVDDSP','+DG_NDCCLU35_ORA2/NDCPVDDS','+DG_PDCCLU17_ORA3/PDCVDDSP','+DG_NDCCLU35_ORA3/NDCPVDDS','+DG_PDCCLU17_ORA4/PDCVDDSP','+DG_NDCCLU35_ORA1/NDCPVDDS','+DG_PDCCLU17_ORA5/PDCVDDSP','+DG_NDCCLU35_ORA2/NDCPVDDS','+DG_PDCCLU17_ORA6/PDCVDDSP','+DG_NDCCLU35_ORA3/NDCPVDDS','+DG_NDCCLU7R05_ORA1/NDCVDDSP','+DG_NDCCLU35_ORA1/NDCPVDDS','+DG_NDCCLU7R05_ORA2/NDCVDDSP','+DG_NDCCLU35_ORA2/NDCPVDDS','+DG_NDCCLU7R05_ORA3/NDCVDDSP','+DG_NDCCLU35_ORA3/NDCPVDDS'
*.max_dump_file_size='UNLIMITED'
*.nls_date_format='yyyy-mm-dd hh24:mi:ss'
*.open_cursors=400
*.parallel_max_servers=0
*.pga_aggregate_target=833617920
*.processes=1500
*.remote_listener='LISTENERS_NDCPVDDS'
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=1655
*.sga_max_size=2013265920
*.sga_target=2013265920
*.sort_area_retained_size=2097152
*.sort_area_size=2097152
*.standby_archive_dest='+DG_NDCCLU35_ARCH/NDCPVDDS/standby_arch/'
NDCPVDDS1.standby_archive_dest='+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/'
NDCPVDDS4.standby_archive_dest='+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/'
NDCPVDDS2.standby_archive_dest='+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/'
NDCPVDDS3.standby_archive_dest='+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/'
*.standby_file_management='AUTO'
NDCPVDDS1.thread=1
NDCPVDDS2.thread=2
NDCPVDDS3.thread=3
NDCPVDDS4.thread=4
*.undo_management='AUTO'
NDCPVDDS1.undo_tablespace='UNDOTBS1'
NDCPVDDS2.undo_tablespace='UNDOTBS2'
NDCPVDDS3.undo_tablespace='UNDOTBS3'
NDCPVDDS4.undo_tablespace='UNDOTBS4'
*.user_dump_dest='/opt/oracle/product/admin/NDCPVDDS/udump'
**This is the parameter file of the instance which is up:**
NDCPVDDS4.__db_cache_size=1157627904
NDCPVDDS1.__db_cache_size=1224736768
NDCPVDDS3.__db_cache_size=1258291200
NDCPVDDS2.__db_cache_size=1442840576
NDCPVDDS4.__java_pool_size=16777216
NDCPVDDS1.__java_pool_size=16777216
NDCPVDDS3.__java_pool_size=16777216
NDCPVDDS2.__java_pool_size=16777216
NDCPVDDS4.__large_pool_size=16777216
NDCPVDDS1.__large_pool_size=16777216
NDCPVDDS3.__large_pool_size=16777216
NDCPVDDS2.__large_pool_size=16777216
NDCPVDDS4.__shared_pool_size=805306368
NDCPVDDS1.__shared_pool_size=738197504
NDCPVDDS3.__shared_pool_size=704643072
NDCPVDDS2.__shared_pool_size=520093696
NDCPVDDS4.__streams_pool_size=0
NDCPVDDS1.__streams_pool_size=0
NDCPVDDS3.__streams_pool_size=0
NDCPVDDS2.__streams_pool_size=0
*.archive_lag_target=900
*.audit_file_dest='/opt/oracle/product/admin/NDCPVDDS/adump'
*.audit_trail='DB'
*.background_dump_dest='/opt/oracle/product/admin/NDCPVDDS/bdump'
*.cluster_database=true
*.cluster_database_instances=4
*.compatible='10.2.0.3.0'
*.control_files='+DG_NDCCLU35_ORA1/ndcpvdds/control01.ctl','+DG_NDCCLU35_ORA2/ndcpvdds/control02.ctl','+DG_NDCCLU35_ORA3/ndcpvdds/control03.ctl'#Restore Control file
*.core_dump_dest='/opt/oracle/product/admin/NDCPVDDS/cdump'
*.cursor_sharing='FORCE'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+DG_PDCCLU17_ORA1/PDCVDDSP','+DG_NDCCLU35_ORA1/NDCPVDDS','+DG_PDCCLU17_ORA2/PDCVDDSP','+DG_NDCCLU35_ORA2/NDCPVDDS','+DG_PDCCLU17_ORA3/PDCVDDSP','+DG_NDCCLU35_ORA3/NDCPVDDS','+DG_PDCCLU17_ORA4/PDCVDDSP','+DG_NDCCLU35_ORA1/NDCPVDDS','+DG_PDCCLU17_ORA5/PDCVDDSP','+DG_NDCCLU35_ORA2/NDCPVDDS','+DG_PDCCLU17_ORA6/PDCVDDSP','+DG_NDCCLU35_ORA3/NDCPVDDS','+DG_NDCCLU7R05_ORA1/NDCVDDSP','+DG_NDCCLU35_ORA1/NDCPVDDS','+DG_NDCCLU7R05_ORA2/NDCVDDSP','+DG_NDCCLU35_ORA2/NDCPVDDS','+DG_NDCCLU7R05_ORA3/NDCVDDSP','+DG_NDCCLU35_ORA3/NDCPVDDS'
*.db_name='PDCVDDSP'
*.db_unique_name='NDCPVDDS'
*.dg_broker_config_file1='+DG_NDCCLU35_ORA2/ndcpvdds/dr1ndcpvdds.dat'
*.dg_broker_config_file2='+DG_NDCCLU35_ORA2/ndcpvdds/dr2ndcpvdds.dat'
*.dg_broker_start=TRUE
*.dispatchers='(PROTOCOL=TCP) (SERVICE=NDCPVDDSXDB)'
*.fal_client='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=ndcgrid36vip)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=ndcpvdds_XPT)(INSTANCE_NAME=NDCPVDDS2)(SERVER=dedicated)))'
*.fal_server='pdcvddsp1'
NDCPVDDS1.instance_number=1
NDCPVDDS2.instance_number=2
NDCPVDDS3.instance_number=3
NDCPVDDS4.instance_number=4
*.job_queue_processes=10
NDCPVDDS1.local_listener='LISTENER_NDCPVDDS1'
NDCPVDDS2.local_listener='LISTENER_NDCPVDDS2'
NDCPVDDS3.local_listener='LISTENER_NDCPVDDS3'
NDCPVDDS4.local_listener='LISTENER_NDCPVDDS4'
*.log_archive_config='dg_config=(PDCVDDSP)'
*.log_archive_dest_1='location="+DG_NDCCLU35_ARCH/NDCPVDDS/arch", valid_for=(ONLINE_LOGFILES,ALL_ROLES)'
*.log_archive_dest_2='SERVICE=NDCVDDSP LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=NDCVDDSP'
*.log_archive_dest_3='service="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=pdcgrid17vip)(PORT=1800)))(CONNECT_DATA=(SERVICE_NAME=pdcvddsp_XPT)(INSTANCE_NAME=PDCVDDSP1)(SERVER=dedicated)))"',' LGWR ASYNC NOAFFIRM delay=0 OPTIONAL max_failure=0 max_connections=1 reopen=300 db_unique_name="pdcvddsp" register net_timeout=180 valid_for=(online_logfiles,primary_role)'
NDCPVDDS1.log_archive_dest_4='location="+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/"',' valid_for=(STANDBY_LOGFILE,STANDBY_ROLE)'
NDCPVDDS4.log_archive_dest_4='location="+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/"',' valid_for=(STANDBY_LOGFILE,STANDBY_ROLE)'
NDCPVDDS2.log_archive_dest_4='location="+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/"',' valid_for=(STANDBY_LOGFILE,STANDBY_ROLE)'
NDCPVDDS3.log_archive_dest_4='location="+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/"',' valid_for=(STANDBY_LOGFILE,STANDBY_ROLE)'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_dest_state_3='ENABLE'
NDCPVDDS1.log_archive_dest_state_4='ENABLE'
NDCPVDDS4.log_archive_dest_state_4='ENABLE'
NDCPVDDS2.log_archive_dest_state_4='ENABLE'
NDCPVDDS3.log_archive_dest_state_4='ENABLE'
*.log_archive_format='PDCVDDSP_%t_%s_%r.arc'
NDCPVDDS1.log_archive_format='PDCVDDSP_%t_%s_%r.arc'
NDCPVDDS4.log_archive_format='PDCVDDSP_%t_%s_%r.arc'
NDCPVDDS2.log_archive_format='PDCVDDSP_%t_%s_%r.arc'
NDCPVDDS3.log_archive_format='PDCVDDSP_%t_%s_%r.arc'
*.log_archive_max_processes=4
*.log_archive_min_succeed_dest=1
*.log_archive_trace=0
NDCPVDDS1.log_archive_trace=0
NDCPVDDS4.log_archive_trace=0
NDCPVDDS2.log_archive_trace=0
NDCPVDDS3.log_archive_trace=0
*.log_file_name_convert='+DG_PDCCLU17_ORA1/PDCVDDSP','+DG_NDCCLU35_ORA1/NDCPVDDS','+DG_PDCCLU17_ORA2/PDCVDDSP','+DG_NDCCLU35_ORA2/NDCPVDDS','+DG_PDCCLU17_ORA3/PDCVDDSP','+DG_NDCCLU35_ORA3/NDCPVDDS','+DG_PDCCLU17_ORA4/PDCVDDSP','+DG_NDCCLU35_ORA1/NDCPVDDS','+DG_PDCCLU17_ORA5/PDCVDDSP','+DG_NDCCLU35_ORA2/NDCPVDDS','+DG_PDCCLU17_ORA6/PDCVDDSP','+DG_NDCCLU35_ORA3/NDCPVDDS','+DG_NDCCLU7R05_ORA1/NDCVDDSP','+DG_NDCCLU35_ORA1/NDCPVDDS','+DG_NDCCLU7R05_ORA2/NDCVDDSP','+DG_NDCCLU35_ORA2/NDCPVDDS','+DG_NDCCLU7R05_ORA3/NDCVDDSP','+DG_NDCCLU35_ORA3/NDCPVDDS'
*.max_dump_file_size='UNLIMITED'
*.nls_date_format='yyyy-mm-dd hh24:mi:ss'
*.open_cursors=400
*.parallel_max_servers=0
*.pga_aggregate_target=833617920
*.processes=1500
*.remote_listener='LISTENERS_NDCPVDDS'
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=1655
*.sga_max_size=2013265920
*.sga_target=2013265920
*.sort_area_retained_size=2097152
*.sort_area_size=2097152
*.standby_archive_dest='+DG_NDCCLU35_ARCH/NDCPVDDS/standby_arch/'
NDCPVDDS1.standby_archive_dest='+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/'
NDCPVDDS4.standby_archive_dest='+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/'
NDCPVDDS2.standby_archive_dest='+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/'
NDCPVDDS3.standby_archive_dest='+DG_NDCCLU35_ARCH/ndcpvdds/standby_arch/'
*.standby_file_management='AUTO'
NDCPVDDS1.thread=1
NDCPVDDS2.thread=2
NDCPVDDS3.thread=3
NDCPVDDS4.thread=4
*.undo_management='AUTO'
NDCPVDDS1.undo_tablespace='UNDOTBS1'
NDCPVDDS2.undo_tablespace='UNDOTBS2'
NDCPVDDS3.undo_tablespace='UNDOTBS3'
NDCPVDDS4.undo_tablespace='UNDOTBS4'
*.user_dump_dest='/opt/oracle/product/admin/NDCPVDDS/udump'
The parameters seem identical to me. Please correct me if I am wrong.
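For reference, the values can also be compared directly on the running instances with a query like this (a sketch):
select inst_id, name, value
from gv$parameter
where name in ('db_file_name_convert','log_file_name_convert')
order by name, inst_id;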
Thanks.

Similar Messages

  • Cannot get two instances to join the same cluster, even on the same machine

    On a RedHat Linux box, I have failed to get two instances of Coherence to join the same cluster. I have managed to get the multicast test tool to show that packets are being sent and received. To do this, I had to run:
    java -cp bin/tangosol.jar -Djava.net.preferIPv4Stack=true com.tangosol.net.MulticastTest
    Wed Apr 15 21:02:45 WET 2009: Sent packet 1.
    Wed Apr 15 21:02:45 WET 2009: Received test packet 1 from self (sent 7ms ago).
    Wed Apr 15 21:02:47 WET 2009: Sent packet 2.
    Wed Apr 15 21:02:47 WET 2009: Received test packet 2 from self
    Wed Apr 15 21:02:49 WET 2009: Sent packet 3.
    Wed Apr 15 21:02:49 WET 2009: Received test packet 3 from self (sent 1ms ago).
    Wed Apr 15 21:02:51 WET 2009: Sent packet 4.
    Wed Apr 15 21:02:51 WET 2009: Received test packet 4 from self
    However, I could not get the following to show that two instances are joining the same cluster... When I start two instances, both of them create a new cluster with only one member in each.
    java -Djava.net.preferIPv4Stack=true -jar lib/coherence.jar
    and obviously, when I try to start two instances of the sample application, I get the same problem.
    java -cp ./lib/coherence.jar:./lib/tangosol.jar:./examples/java -Djava.net.preferIPv4Stack=true -Dtangosol.coherence.localhost=172.16.27.10 -Dtangosol.coherence.localport=8188 -Dtangosol.coherence.cacheconfig=/cache/explore-config.xml com.tangosol.examples.explore.SimpleCacheExplorer

    Thanks for that... I ran:
    jdk1.6.0_13/bin/java -Dtangosol.coherence.log.level=6 -Dtangosol.coherence.log=/my1.log -Dtangosol.coherence.cacheconfig=/cache/explore-config.xml -Djava.net.preferIPv4Stack=true -jar lib/coherence.jar
    and then
    jdk1.6.0_13/bin/java -Dtangosol.coherence.log.level=6 -Dtangosol.coherence.log=/my2.log -Dtangosol.coherence.cacheconfig=/cache/explore-config.xml -Djava.net.preferIPv4Stack=true -jar lib/coherence.jar
    from the same machine and get the following from the log file of the second run (my2.log)
    Oracle Coherence Version 3.4.2/411
    Grid Edition: Development mode
    Copyright (c) 2000-2009 Oracle. All rights reserved.
    2009-04-16 06:53:11.574/0.625 Oracle Coherence GE 3.4.2/411 <Warning> (thread=main, member=n/a): UnicastUdpSocket failed to set receive buffer size to 1428 packets (2096304 bytes); actual size is 89 packets (131071 bytes). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.
    2009-04-16 06:53:11.660/0.711 Oracle Coherence GE 3.4.2/411 <D5> (thread=Cluster, member=n/a): Service Cluster joined the cluster with senior service member n/a
    2009-04-16 06:53:14.892/3.943 Oracle Coherence GE 3.4.2/411 <Info> (thread=Cluster, member=n/a): Created a new cluster "cluster:0x2FFB" with Member(Id=1, Timestamp=2009-04-16 06:53:11.58, Address=192.168.1.7:8089, MachineId=26887, Location=process:3514, Role=CoherenceConsole, Edition=Grid Edition, Mode=Development, CpuCount=8, SocketCount=2) UID=0xC0A8010700000120ADB3521C69071F99
    SafeCluster: Name=cluster:0x2FFB
    Group{Address=224.3.4.2, Port=34411, TTL=4}
    MasterMemberSet
    ThisMember=Member(Id=1, Timestamp=2009-04-16 06:53:11.58, Address=192.168.1.7:8089, MachineId=26887, Location=process:3514, Role=CoherenceConsole)
    OldestMember=Member(Id=1, Timestamp=2009-04-16 06:53:11.58, Address=192.168.1.7:8089, MachineId=26887, Location=process:3514, Role=CoherenceConsole)
    ActualMemberSet=MemberSet(Size=1, BitSetCount=2
    Member(Id=1, Timestamp=2009-04-16 06:53:11.58, Address=192.168.1.7:8089, MachineId=26887, Location=process:3514, Role=CoherenceConsole)
    RecycleMillis=120000
    RecycleSet=MemberSet(Size=0, BitSetCount=0
    Services
    TcpRing{TcpSocketAccepter{State=STATE_OPEN, ServerSocket=192.168.1.7:8089}, Connections=[]}
    ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.4, OldestMemberId=1}
    the contents of the xml file are:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
    <caching-scheme-mapping>
    <!--
    Caches with any name will be created as default replicated.
    -->
    <cache-mapping>
    <cache-name>*</cache-name>
    <scheme-name>default-replicated</scheme-name>
    </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
    <!--
    Default Replicated caching scheme.
    -->
    <replicated-scheme>
    <scheme-name>default-replicated</scheme-name>
    <service-name>ReplicatedCache</service-name>
    <backing-map-scheme>
    <class-scheme>
    <scheme-ref>default-backing-map</scheme-ref>
    </class-scheme>
    </backing-map-scheme>
    </replicated-scheme>
    <!--
    Default backing map scheme definition used by all
    The caches that do not require any eviction policies
    -->
    <class-scheme>
    <scheme-name>default-backing-map</scheme-name>
    <class-name>com.tangosol.util.SafeHashMap</class-name>
    </class-scheme>
    </caching-schemes>
    </cache-config>
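    One thing worth checking (a sketch; these -D overrides are standard Coherence 3.x system properties) is pinning both instances explicitly to the multicast group shown in the log, so they cannot end up on different cluster addresses/ports:
    java -Dtangosol.coherence.clusteraddress=224.3.4.2 -Dtangosol.coherence.clusterport=34411 -Dtangosol.coherence.ttl=4 -Djava.net.preferIPv4Stack=true -jar lib/coherence.jar
    Run the same command for the second instance and check whether its log now reports joining the existing cluster instead of creating a new one.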

  • RAC & DataGuard (Physical Standby)

    Hello all,
    I'm trying to get a high-level overview of how RAC & DataGuard would behave in the following configurations. I've written down my understanding of how things would work. Please correct me if I'm wrong.
    1) 2 node RAC (Primary Database) with a single instance physical standby.
    a) The same standby-related init.ora parameters would have to be configured on both primary RAC nodes.
    b) The redo apply service at the standby would merge the redo from the 2 threads from the primary and apply it to the standby to keep it in sync.
    c) During switch over only one primary RAC instance should be up besides the standby instance.
    d) During switch back again only one primary RAC instance should be up besides the standby instance.
    e) During failover, of course, both primary instances would be down, which warrants the failover.
    2) 2 node RAC (Primary) with a 2 node physical standby
    This is where it gets complex, and I'm not really sure what a, b, c, d, e would look like in this scenario. I'd appreciate it if you could shed some light on this and list them for this case.
    I'm assuming that only one instance in the standby RAC should be up when the standby is in a RAC configuration for the redo apply to work.
    Also, if there is a white paper that details a step-by-step procedure for setting up the above 2 scenarios, please let me know. So far I was able to find only the MAA white paper, but that was not very helpful. If you can recommend a good book for RAC & DataGuard, that would be great too.
    Thanks for your help

    >
    1) 2 node RAC (Primary Database) with a single instance physical standby.
    a) Same standby related init.ora parameters would have to be configured on both primary RAC nodes.
    Usually RAC nodes share their spfile on a shared disk volume or through ASM, so they have identical DG parameters anyway.
    b) The redo apply service at the standby would merge the redo from the 2 threads from the primary and apply it to the standby to keep it in sync.
    Correct.
    c) During switch over only one primary RAC instance should be up besides the standby instance.
    Sounds logical to me
    Edit: In fact, during normal operation both RAC nodes are up, so during switchover they are both active too.
    It is the configuration that matters. As soon as the standby becomes primary, it might be that it talks only to one RAC node for its log-apply files.
    d) During switchback, again only one primary RAC instance should be up besides the standby instance.
    That single instance is the one that was already running in step c.
    e) During failover, of course, both primary instances would be down, which warrants the failover.
    Correct. Keep in mind that during a failover the RAC config would be deconfigured from the DG setup and has to be added back again as standby.
    You might have to take a look at: http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10g_RACPrimarySingleInstancePhysicalStandby.pdf
    >
    2) 2 node RAC (Primary) with a 2 node physical standby
    This is where it gets complex, and I'm not really sure what a, b, c, d, e would look like in this scenario. I'd appreciate it if you could shed some light on this and list them for this case.
    I'm assuming that only one instance in the standby RAC should be up when the standby is in RAC configuration for the redo apply to work.
    No. RAC primary to RAC standby would mean both nodes are up on both sides. PrimNode1 sends its redo logs to StandbyNode1, and the same for PrimNode2 and StandbyNode2.
    Have a look here: http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10g_RACPrimaryRACPhysicalStandby.pdf
    HTH,
    FJFranken
    My Blog: http://managingoracle.blogspot.com
    Edited by: fjfranken on 16-jul-2010 1:13

  • Question regarding removal of physical standby database?

    We are running a 2-node RAC cluster on Linux Itanium, and we have set up one physical standby server. We did not use the Data Guard broker to set up the standby database; we simply set 2 parameters, log_archive_config and log_archive_dest_2.
    We are now looking to remove the physical standby server, and I see that the 2 parameters are dynamic, so I was trying to find out if it is as simple as running the alter system commands to remove the 2 parameters?
    I guess my real question is more about the syntax of removing the parameters.
    Would this syntax be correct?
    alter system set log_archive_config='' scope=both sid='*';
    alter system set log_archive_dest_2='' scope=both sid='*';
    and does it make a difference which parameter is removed first?
    Thanks I appreciate any help.

    The standby system is also a 2-node cluster, but only one of the nodes has its instance up in a mount state running in recovery mode; the other instance is simply shut down.
    Thanks.
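    For what it's worth, a common variant (a sketch) is to defer the destination first and clear the parameters afterwards:
    alter system set log_archive_dest_state_2='DEFER' scope=both sid='*';
    alter system set log_archive_dest_2='' scope=both sid='*';
    alter system set log_archive_config='' scope=both sid='*';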

  • Dataguard physical standby archive log question

    Hi all,
    I will try to keep this simple..
    I have a 4 node RAC primary shipping logs to a 2 node physical standby.
    On the primary, when I run 'alter system archive log current' on an instance, I only see 1 log being applied on the standby (that is, by querying v$archived_log).
    If I run the following on the standby:
    select thread#, sequence#, substr(name,43,70) "NAME", registrar, applied, status, first_time
    from v$archived_log
    where first_time in (select max(first_time) from v$archived_log group by thread#)
    order by thread#;
    I get:
    THREAD# SEQUENCE# NAME REGISTR APPLIED S FIRST_TIME
    1 602 thread_1_seq_602.2603.721918617 RFS YES A 17-jun-2010 12:56:58
    2 314 thread_2_seq_314.2609.721918627 RFS NO A 17-jun-2010 12:56:59
    3 311 thread_3_seq_311.2604.721918621 RFS NO A 17-jun-2010 12:57:00
    4 319 thread_4_seq_319.2606.721918625 RFS NO A 17-jun-2010 12:57:00
    Why do we only see the max(sequence#) having been applied and not all of them?
    This is the same no matter how many times I archive the current log files on any of the instances on the primary, and the standby does not have any gaps.
    Hope this is clear..
    any ideas?
    jd

    OK, here is the output from gv$archived_log on the standby BEFORE 'alter system archive log current' on the primary:
    THREAD# SEQUENCE# NAME REGISTR APPLIED S FIRST_TIME
    1 679 thread_1_seq_679.1267.722001505 RFS NO A 18-jun-2010 11:58:22
    1 679 thread_1_seq_679.1267.722001505 RFS NO A 18-jun-2010 11:58:22
    2 390 thread_2_seq_390.1314.722001507 RFS NO A 18-jun-2010 11:58:23
    2 390 thread_2_seq_390.1314.722001507 RFS NO A 18-jun-2010 11:58:23
    3 386 thread_3_seq_386.1266.722001505 RFS YES A 18-jun-2010 11:58:22
    3 386 thread_3_seq_386.1266.722001505 RFS YES A 18-jun-2010 11:58:22
    4 393 thread_4_seq_393.1269.722001507 RFS NO A 18-jun-2010 11:58:23
    4 393 thread_4_seq_393.1269.722001507 RFS NO A 18-jun-2010 11:58:23
    Output from v$archived_log on standby AFTER 'alter system archive log current' on primary
    THREAD# SEQUENCE# NAME REGISTR APPLIED S FIRST_TIME
    1 680 thread_1_seq_680.1333.722004227 RFS NO A 18-jun-2010 11:58:29
    1 680 thread_1_seq_680.1333.722004227 RFS NO A 18-jun-2010 11:58:29
    2 391 thread_2_seq_391.1332.722004227 RFS NO A 18-jun-2010 11:58:30
    2 391 thread_2_seq_391.1332.722004227 RFS NO A 18-jun-2010 11:58:30
    3 387 thread_3_seq_387.1271.722004225 RFS NO A 18-jun-2010 11:58:28
    3 387 thread_3_seq_387.1271.722004225 RFS NO A 18-jun-2010 11:58:28
    4 394 thread_4_seq_394.1270.722004225 RFS YES A 18-jun-2010 11:58:29
    4 394 thread_4_seq_394.1270.722004225 RFS YES A 18-jun-2010 11:58:29
    As a reminder, we have a 4-node RAC system shipping logs to a 2-node RAC standby. There are no gaps, but only one log is ever registered as being applied.
    Why is that? Why aren't all logs registered as being applied?
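    For what it's worth, apply progress can be cross-checked independently of the APPLIED flag (a sketch; on a standby, v$log_history reflects logs that have actually been applied):
    select thread#, max(sequence#) as last_received from v$archived_log group by thread# order by thread#;
    select thread#, max(sequence#) as last_applied from v$log_history group by thread# order by thread#;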

  • Physical Standby Database with slight complication

    Hello Database experts,
    Primary is a multi-node RAC database with, primarily, two distinct schemas (for simplicity's sake, schema A on tablespaces A1/A2 and schema B on tablespaces B1/B2). There is a proposed physical standby database. Now, the primary is an enterprise data warehouse with millions of updates per minute. Is it possible to somehow prevent objects owned by schema A from being propagated to the DR instance?
    Any workaround possible?
    Assume that a hot backup of the primary database has to be done, so we cannot perform nologging operations on schema A, and a logical standby is not an option.
    Kind Regards,
    Sunil

    ***Is it possible to somehow prevent objects owned by schema A from being propagated to the DR instance?***
    Yes, but not in a physical standby. If you bypass those objects on a physical standby, recovery cannot be enabled for them, so data loss may occur. Try to configure a logical standby database; there you can get what you expect.
    thanks

  • DB Link to Physical Standby

    How can we execute explain plan for queries connecting to a physical standby database over a DB link?
    explain plan for select * from test.test_table@stby;
    We get ORA-16000 when trying to do that.
    Tried to set the transaction to read only.
    Then tried to run explain plan; that gives error ORA-02047: cannot join the distributed transaction in progress.
    Any insight will be appreciated.
    MM

    You cannot use "explain plan" as it means inserting into the "plan_table" table, and your standby is in read-only mode, isn't it :)
    Activate SQL trace and execute the query instead.
    Best would be to convert your standby to a snapshot standby :). Use it for any kind of testing, like explain plan, or even run that query and monitor; when you are satisfied, come back to physical standby.
    Perform the following steps to convert a physical standby database into a snapshot standby database:
    Stop Redo Apply, if it is active.
    On an Oracle Real Application Clusters (RAC) database, shut down all but one instance.
    Ensure that the database is mounted, but not open.
    Issue the following SQL statement to perform the conversion:
    SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
    The database is dismounted after conversion and must be restarted.
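    Putting those steps together (a sketch; the reverse conversion back to physical standby is included for completeness):
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;  -- stop Redo Apply
    SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
    SQL> SHUTDOWN IMMEDIATE  -- dismounted after conversion
    SQL> STARTUP             -- open read/write; explain plan now works
    -- when testing is done:
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;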
    Edited by: Karan on Feb 8, 2013 12:00 AM

  • RAC to RAC physical standby DB creation

    Oracle version : 11.2.0.2
    Platform : Solaris
    I have created physical standby DBs of standalone DBs using RMAN's DUPLICATE command. Now I need to create a physical standby
    for a 2-node RAC primary DB. The standby will also be a 2-node RAC.
    On the standby site, Grid is successfully installed, ASM instances are up, and the diskgroup for the standby DB is ready. What are the extra changes that I need to make for RAC-to-RAC standby DB creation?
    Currently, this is what I am planning to do after:
    1. establishing connectivity from standby to primary,
    2. moving the backup to the standby, and
    3. starting the standby instance (not MOUNTing it).
    From Node1 of the standby server, I will run (primdb is the TNS entry to connect to the primary DB):
    $ rman target sys/mypass@primdb auxiliary /
    RMAN> duplicate target database for standby dorecover;

    Cedar wrote:
    Oracle version : 11.2.0.2
    Platform : Solaris
    I have created physical standby DBs of standalone DBs using RMAN's DUPLICATE command. Now I need to create a physical standby
    for a 2-node RAC primary DB. The standby will also be a 2-node RAC.
    On the standby site, Grid is successfully installed, ASM instances are up, and the diskgroup for the standby DB is ready. What are the extra changes that I need to make for RAC-to-RAC standby DB creation?
    Currently, this is what I am planning to do after:
    1. establishing connectivity from standby to primary,
    2. moving the backup to the standby, and
    3. starting the standby instance (not MOUNTing it).
    From Node1 of the standby server, I will run (primdb is the TNS entry to connect to the primary DB):
    $ rman target sys/mypass@primdb auxiliary /
    RMAN> duplicate target database for standby dorecover;
    The procedure you are looking for has already been provided by Mseberg.
    In addition to that:
    as you are on 11g, you can perform a duplicate from the active database without a backup; then you do not need to copy those backups to the standby server.
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-rac-standby-133152.pdf
    http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmdupdb.htm#BRADV89929
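    For reference, the active-database variant of that duplicate looks roughly like this (a sketch; same connection strings as in the original post):
    $ rman target sys/mypass@primdb auxiliary /
    RMAN> duplicate target database for standby from active database dorecover;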

  • Test physical standby database in dataguard

    Hi all,
    I had a successful Data Guard implementation for Oracle 9iR2.
    Primary and standby databases are on different nodes; logs from the primary are shipping and applying to the physical standby database successfully.
    As these databases are for Oracle 11i EBS, for testing the Data Guard status I created a table manually with date and remarks columns. Periodically I insert a few records into this table in the primary database and then check this table in the standby database (by opening it in read-only mode).
    Now I want to test the physical standby database with the applications tier, but I don't want to disturb the original primary database.
    These are the normal switchover steps:
    1) at primary db
    ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    2) at standby
    ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    3) at primary, now this will be the stand by database
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    4) at standby, now this will be the primary database
    SHUTDOWN IMMEDIATE;
    STARTUP
    My concern is: if I skip steps 1 and 3 and only execute steps 2 and 4, will that work for my requirement, or is there some other way to achieve this goal?
    Regards,
    Ken

    To avoid touching the primary production DB, you may try the following command on your standby database side:
    alter database activate standby database;
    This will turn your standby database into a primary database; however, the logs will be reset, and this means you will need to rebuild your DR.
    Another way is to use the flashback feature on your standby database: activate it, and revert back to the flashback point when the DR testing is done.
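    A sketch of that flashback approach (note: FLASHBACK DATABASE requires 10g or later, so it does not apply to a 9iR2 standby; the restore point name is illustrative, and a flash recovery area must be configured):
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> CREATE RESTORE POINT before_dr_test GUARANTEE FLASHBACK DATABASE;
    SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
    SQL> ALTER DATABASE OPEN;
    -- run the application-tier tests, then revert:
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    SQL> FLASHBACK DATABASE TO RESTORE POINT before_dr_test;
    SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;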

  • Backup physical standby database

    Hi,
    As part of a backup & restore concept, we are planning to implement a physical standby database using Oracle 10g on Sun Solaris.
    The idea is to use the standby database as the backup source.
    What are the required steps and constraints? Do I have to stop log apply during backups or change the database state out of recovery mode?
    Thanks,
    Robert

    Hi,
    It's a good way to take the backup load off the live node.
    According to your plan:
    If the live node fails, the standby will be failed over or switched over to production.
    Then, through RMAN, we can restore the standby database from its backup, and the standby is back.
    Is this correct/feasible?
    Regards,
    Ven.
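    For reference, a minimal sketch of taking the backup on the standby (assumes a mounted 10g standby; pausing Redo Apply for the duration of the backup is a common practice):
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    $ rman target /
    RMAN> backup database;
    RMAN> backup archivelog all;
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;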

  • Applying cpu with a physical standby in place - 11g

    Note 187242.1 covers this with respect to 9i. Is this topic covered for 11g specifics somewhere? Can't Google/MetaLink it anywhere. Thanks.

    The "readme" for Patch 9369783 (which covers the AprilCPU for our 11.1.0.7 HPUX-IA64 environment) includes this short reference to DataGuard:
    If you are using a Data Guard Physical Standby database, you must first install this patch on the primary database before installing the patch on the physical standby database. It is not supported to install this patch on the physical standby database before installing the patch on the primary database. For more information, see My Oracle Support Note 278641.1.
    When checking note 278641.1, we see that it also appears to cover only 10.2. Although this note has more detail, it is clearly the same procedure as discussed in 813445.1. Therefore, the conclusion that I make is: OPatch works exactly the same with DataGuard in 11g as it did in 10g.
    We will be upgrading our DataGuard environment to 11g in one month. At this point, I am completely expecting our OPatch procedures to remain unchanged from what we have done for years with 9i and 10g. I would also note that the upgrade procedures we have tested (involving DG from 10.2.0.4 to 11.1.0.7) are nearly identical to the above-mentioned support notes.
    Hope that helps,
    Stan

  • Switchover/Failover to physical standby

    Hi All,
    I have Data Guard configured between a primary and a physical standby. I would like to know how I can switch over to the physical standby when the network link is down between the primary and the physical standby.
    I mean, what steps can I follow to make the physical standby the primary and the primary the standby even when the network connectivity between them is down?
    Regards,
    Raj

    user12263161 wrote:
    Hi All,
    I have Data Guard configured between a primary and a physical standby. I would like to know how I can switch over to the physical standby when the network link is down between the primary and the physical standby.
    You can only failover in this case, but, if you have flashback on, you can reinstate the ex-primary as a standby, as described here: http://nikolayivankin.wordpress.com/2012/02/14/dgmgrl-reinstating-ex-primary-to-standby-by-flashback-database-feature/
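    A rough sketch with the broker (database names are placeholders; reinstating the old primary afterwards requires flashback database to have been enabled on it beforehand):
    DGMGRL> connect sys/password@standby_db
    DGMGRL> failover to standby_db;
    -- once the old primary is reachable again:
    DGMGRL> reinstate database primary_db;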

  • Forms are not opening if one of the 2-node 11gR2 DB instances is down

    Recently we upgraded the database from a 2-node 10gR2 RAC to a 2-node 11gR2 RAC.
    But if one of the instances is down, the forms are not picking up the 2nd node and are not opening.
    Can someone help me fix this issue, as it's PROD?

    Hi,
    Please send the output of the below command; run this on your application node:
    "tnsping MMAPPS_BALANCE"
    Your SID_BALANCE TNS entry on the application server should look like the one below (this is just an example for a 3-node RAC). You should not edit the tnsnames.ora file manually; if you configure your context file properly, autoconfig will generate it properly.
    LABDB_BALANCE=
            (DESCRIPTION=
                (ADDRESS_LIST=
                    (LOAD_BALANCE=YES)
                    (FAILOVER=YES)
                    (ADDRESS=(PROTOCOL=tcp)(HOST=node2-vip.hingu.net)(PORT=1522))
                    (ADDRESS=(PROTOCOL=tcp)(HOST=node1-vip.hingu.net)(PORT=1522))
                    (ADDRESS=(PROTOCOL=tcp)(HOST=node3-vip.hingu.net)(PORT=1522))
                )
                (CONNECT_DATA=
                    (SERVICE_NAME=LABDB)
                )
            )
  • Wrong error message when two instances with the same PK are created and committed

    I accidentally created two instances with the same PK, and I got the wrong error.
    Took me a while to figure out what's going on - I use multiple fields mapped to
    the same column a lot, but not in this case.
    kodo.util.FatalUserException: User errors were detected when flushing to the data store. The getNestedExceptions() method of this exception will return an array of the specific errors.
    NestedThrowables:
    kodo.util.UserException: Attempt to set column "MILESTONE_TYPE.DESCRIPTION" to two different values: "ContractMilestone 3", "Small Business Review" This usually occurs when you map different fields to the same column, but you do not keep the values of these fields in synch.
    kodo.util.UserException: Attempt to set column "MILESTONE_TYPE.DESCRIPTION" to two different values: "ContractMilestone 5", "JOFOC Approved" This usually occurs when you map different fields to the same column, but you do not keep the values of these fields in synch.
    kodo.util.UserException: Attempt to set column "MILESTONE_TYPE.DESCRIPTION" to two different values: "ContractMilestone 1", "RFC/AP Received" This usually occurs when you map different fields to the same column, but you do not keep the values of these fields in synch.
    kodo.util.UserException: Attempt to set column "MILESTONE_TYPE.DESCRIPTION" to two different values: "ContractMilestone 4", "JOFOC Submitted for Review" This usually occurs when you map different fields to the same column, but you do not keep the values of these fields in synch.
    kodo.util.UserException: Attempt to set column "MILESTONE_TYPE.DESCRIPTION" to two different values: "ContractMilestone 6", "FedBizOpps Announcement" This usually occurs when you map different fields to the same column, but you do not keep the values of these fields in synch.
    kodo.util.UserException: Attempt to set column "MILESTONE_TYPE.DESCRIPTION" to two different values: "ContractMilestone 2", "RFC/AP Approved" This usually occurs when you map different fields to the same column, but you do not keep the values of these fields in synch.
    at kodo.runtime.PersistenceManagerImpl.flushInternal(PersistenceManagerImpl.java:721)
    at kodo.runtime.PersistenceManagerImpl.beforeCompletion(PersistenceManagerImpl.java:573)
    at kodo.runtime.LocalManagedRuntime.commit(LocalManagedRuntime.java:63)
    at kodo.runtime.PersistenceManagerImpl.commit(PersistenceManagerImpl.java:410)
    at peacetech.nci.cs.LoadData.loadLookups(LoadData.java:154)
    at peacetech.nci.cs.LoadData.main(LoadData.java:240)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:324)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:75)
    NestedThrowablesStackTrace:
    kodo.util.UserException: Attempt to set column "MILESTONE_TYPE.DESCRIPTION" to two different values: "ContractMilestone 3", "Small Business Review" This usually occurs when you map different fields to the same column, but you do not keep the values of these fields in synch.
    at kodo.jdbc.runtime.VRow.setObject(VRow.java:94)
    at kodo.jdbc.sql.AbstractRow.setString(AbstractRow.java:316)
    at kodo.jdbc.meta.ValueFieldMapping.update(ValueFieldMapping.java:195)
    at kodo.jdbc.meta.ColumnFieldMapping.insert(ColumnFieldMapping.java:226)
    at kodo.jdbc.runtime.UpdateManagerImpl.insert(UpdateManagerImpl.java:202)
    at kodo.jdbc.runtime.UpdateManagerImpl.flush(UpdateManagerImpl.java:89)
    at kodo.jdbc.runtime.JDBCStoreManager.flush(JDBCStoreManager.java:480)
    at kodo.runtime.DelegatingStoreManager.flush(DelegatingStoreManager.java:154)
    at kodo.runtime.PersistenceManagerImpl.flushInternal(PersistenceManagerImpl.java:705)
    at kodo.runtime.PersistenceManagerImpl.beforeCompletion(PersistenceManagerImpl.java:573)
    at kodo.runtime.LocalManagedRuntime.commit(LocalManagedRuntime.java:63)
    at kodo.runtime.PersistenceManagerImpl.commit(PersistenceManagerImpl.java:410)
    at peacetech.nci.cs.LoadData.loadLookups(LoadData.java:154)
    at peacetech.nci.cs.LoadData.main(LoadData.java:240)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:324)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:75)
    (The remaining five nested exceptions carry the same stack trace, differing only in the milestone values already listed above.)
    Process terminated with exit code 0

    Thanks for the report. I've recorded the bug here:
    http://bugzilla.solarmetric.com/show_bug.cgi?id=705

  • Can I judge whether two runtime instances are the same handle?

    Can I judge whether two instances are the same handle, just like getting their address with "&" in C++ for comparison?
    If yes, how?
    Thanks in advance,
    Frederick

    Integer a = new Integer(7);
    Integer b = new Integer(7);
    boolean sameValue = (a.equals(b)); // true
    boolean sameInstance = (a == b); // false
    Just in case jsalonen's answer was a bit terse.
