Cluster 3.0 and CVM

My configuration is:
- the servers are E6500s
- I have 2 nodes (A and B)
- I installed Cluster 3.0, Volume Manager 3.2, CVM and Oracle
I need to create a file system, for example /test, and I need to work with this file system in parallel on both nodes:
Domain A)
/test
Domain B)
/test
with the same disks.
Thanks.

Sounds like you need to create a global filesystem (UFS or VxFS, it doesn't really matter) and then symlink it to the mountpoint or end location of your choice.
The recommendation would be to do this on public disks, not private disks, as Cluster 3.0 will hit your interconnects pretty hard when going after storage on the other node (if there is no native connection to the storage).
create the volume on public storage
scsetup
choose 4) Device groups and volumes
choose 2) Synchronize volume information for a VxVM device group
(this assumes you have already registered the device group with the cluster)
then you can mkfs/newfs
vi /etc/vfstab (on both sides)
add the mountpoint as /global/somemountpoint
add the global option to the mount options field
mount the filesystem
when you roll it into the resource group and restart the resource group, it will come online nicely
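For example, a hedged sketch of the /etc/vfstab entry on both nodes (the disk group "testdg", volume "testvol", and mountpoint /global/test are placeholder names, not from the original post):
/dev/vx/dsk/testdg/testvol /dev/vx/rdsk/testdg/testvol /global/test vxfs 2 yes global,logging
With the global mount option, mounting /global/test from one node makes it visible and writable on both.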
If I've forgotten a step please forgive me, I'm walking through this in my head versus reading the doco right now... check out docs.sun.com for more detailed info
Jeff

Similar Messages

  • Change Cluster IP address and Hostname

    Hi pals,
    Just to confirm: we have a cluster of PUB and SUB for Unity Connection 8.6.2a. When changing the IP address or hostname, does the procedure always start with the SUB first and then the PUB, or the PUB first and then the SUB?
    If we start with the PUB first, I would need to install a new license file before I can start on the SUB, right?
    cheers,
    TA

    Hi TA,
    The steps are different depending on whether the server(s) are defined by IP address or hostname,
    but all the steps are here:
    http://www.cisco.com/en/US/docs/voice_ip_comm/connection/8x/upgrade/guide/8xcucrug050.html
    If you are talking licenses then you must be running in a virtual environment. If so,
    you will have a 30-day grace period to get the license updated:
    License Files and License MACs for Cisco Unity Connection Virtual Machines
    Each license file for a Cisco Unity Connection virtual machine (except for the demonstration license file) is registered to a license MAC value. This value is calculated to look like a MAC address based on the settings listed in Table 42-1, but it is not a real MAC address.
    If you change any of these settings, the existing licenses become invalid, and you must obtain replacement license files that are registered to the calculated license MAC value that is based on the new settings. The old licenses continue to work for a 30-day grace period. During the grace period, you can change the settings back to the original values to make your original licenses valid again. If you need more than 30 days of grace period, change your settings to the original values, then change them back to the new values that you want to use, and you will get another 30-day grace period.
    If you do not reset the 30-day grace period by changing settings back to the original values, then Connection stops running. If you restart the server, Connection starts running again but stops after 24 hours. Each time you restart the server, Connection runs for another 24 hours until you either change the settings back to the original values or you install licenses based on the new license MAC value.
    http://www.cisco.com/en/US/docs/voice_ip_comm/connection/8x/administration/guide/8xcucsag310.html#wp1074759
    Cheers!
    Rob

  • Disaster Recovery in Windows 2003/Cluster, SQL 2000 and R3

    Hi,
    Can someone share experience/knowledge of disaster recovery scenarios with MSCS/SQL Server/SAP? One of our customers has R/3 / SQL Server 2000 / Windows 2003 (cluster).
    We would like to evaluate best possible options for the Disaster Recovery which are supported by SAP.
    We have thought about
    1. Log shipping
    2. Standby Database
    3. Restore backup on new cluster
    4. Homogeneous System copy.
    We do not want to go for the first two and would like to explore the 3rd and 4th options.
    Any links to documents/blogs will be helpful.
    Thanks,
    Manoj

    > I am confused. Option 3 will be restoring backup
    Yes - but what will you restore? Everything? If you're running on a cluster it's unlikely that both nodes will fail at the same time so there is still one node that can and will run the software, no?
    > and 4 will be sapinst. Isn't it? Are both options supported by SAP?
    Yes.
    > Is there a SAP standard documentation for building cluster from scratch and build SAP system from backup or sapinst for DR?
    The standard installation documentation covers a cluster installation.
    > I am sure there will be installation document if it is a fresh installation. But not sure if there is one for DR.
    If you have a cluster then you have a high availability already. If a node fails, you will "just" reinstall that node and put it back into the cluster.
    What kind of DR scenario are you thinking about?
    Markus

  • 10g database, cluster on AIX and JDeveloper

    Are there some known incompatibilities between an Oracle 10g database with cluster on AIX and a JDeveloper application (JDev 10g) using dynamic JDBC credentials (JSP, BC4J, ADF)?

    > My application vendor advises me that my patch level on AIX should be equivalent to that of the following Windows patch level.
    It may not be available, because most patches are OS specific. If there is a patch for a Windows behaviour, how are you expecting a patch for a Unix version? Explain this to your vendor. On the other hand, if it is a generic issue, then you should look for an AIX patch.
    Edited by: Maran Viswarayar on Jan 18, 2010 5:10 PM

  • Failover Zones / Containers with Sun Cluster Geographic Edition and AVS

    Hi everyone,
    Is the following solution supported/certified by Oracle/Sun? I did find some docs saying it is but cannot find concrete technical information yet...
    * Two sites with a 2-node cluster in each site
    * 2x Failover containers/zones that are part of the two protection groups (1x group for SAP, other group for 3rd party application)
    * Sun Cluster 3.2 and Geographic Edition 3.2 with Availability Suite for SYNC/ASYNC replication over TCP/IP between the two sites
    The Zones and their application need to be able to failover between the two sites.
    Thanks!
    Wim Olivier

    Fritz,
    Obviously, my colleagues and I in the Geo Cluster group build and test Geo clusters all the time :-)
    We have certainly built and tested Oracle (non-RAC) configurations on AVS. One issue you do have, unfortunately, is that of zones plus AVS (see my Blueprint for more details: http://wikis.sun.com/display/BluePrints/Using+Solaris+Cluster+and+Sun+Cluster+Geographic+Edition). Consequently, you can't build the configuration you described. The alternative is to sacrifice zones for now and wait for the fixes to RG affinities (no idea on the schedule for this feature) or find another way to do this - probably hand crafted.
    If you follow the OHAC pages (http://www.opensolaris.org/os/community/ha-clusters/) and look at the endorsed projects, you'll see that there is a Script Based Plug-in on the way (for OHACGE) that I'm writing. So, if you are interested in playing with the OHACGE source or the SCXGE binaries, you might see that appear at some point. Of course, these aren't supported solutions though.
    Regards,
    Tim
    ---

  • Dependency between Sun Cluster 2.2 and Sun hardware ?

    I am evaluating Sun Cluster 2.2 for use as the clustering software within our project.
    Can anybody tell me if there is a dependency between Sun Cluster 2.2 and Sun hardware?
    Can I use Sun Cluster 2.2 with other hardware platforms running Sun Solaris 7 and Veritas Volume Manager?

    I have had this discussion with fellow cluster admins, and we think a port of Cluster 2.2 could be done to Intel running Solaris. And here is the BUT: Sun does not support anything except a SPARC-based cluster. So currently there is definitely a dependency between the 2.2 software and the hardware.
    Hope this helps
    Heath

  • Physical table list against Cluster Table CDPOS and PCDPOS

    Hello experts,
    For a customized function requirement, we need to know the physical table
    list behind the cluster tables CDPOS, PCDPOS and EDID4, just as
    cluster table BSEG is backed by the six physical tables
    BSAD/BSID/BSAS/BSIS/BSAK/BSIK. We also want to know if there is any
    general way to find out the physical table list for any cluster table.
    My questions are:
    1. How can I find all the transparent tables for cluster table CDPOS,
    just as cluster table BSEG has the transparent tables
    BSAD/BSID/BSAS/BSIS/BSAK/BSIK?
    2. How can I find all the transparent tables for cluster table PCDPOS?
    3. How can I find all the transparent tables for cluster table EDID4?
    4. Additionally, I want to know if there is any general way to find out
    all the transparent tables for a specific cluster table.
    Many thanks.

    Hello,
    simply look in transaction SE11.
    Example:
    1. SE11 -> Table CDPOS -> Display. On the tab 'Delivery and Maintenance' you'll find Pool/Cluster 'CDCLS'.
    2. SE11 -> Table CDCLS -> Display. On the next screen, position the cursor on CDCLS -> Where-used list -> Tables -> you'll find tables CDPOS and PCDPOS.
    Same thing with EDID4 -> EDI40 ...
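    As a general way (an assumption about the data dictionary layout, worth verifying on your release): the pool/cluster assignment of a table is stored in dictionary table DD02L, field SQLTAB. So in SE16 on DD02L, selecting by SQLTAB = 'CDCLS' (or, in ABAP, SELECT tabname FROM dd02l WHERE sqltab = 'CDCLS') should list every cluster table stored in CDCLS.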
    Regards Wolfgang

  • Extract cluster element names and values automatically

    Dear LabVIEW forum,
    I would like to extract cluster element names and values automatically
    without prior knowledge of the cluster element size or contents.
    I am able to extract the names but have some trouble with the values (see attached VI).
    I wish to write each cluster element's name and value to a string, and then write the string to a new line in a file.
    Regards,
    Jamie
    Using LabVIEW version 8.0
    Attachments:
    extract cluster element names and values automatically.vi ‏14 KB

    This can become arbitrarily complex, because a cluster can contain many types of data, even other clusters, arrays, typedefs, etc. So getting a "value" of an element is not as easy as you might think. There is a reason you get a variant.
    If all cluster elements are simple numerics you can use "variant to data" with the correct type. The attached shows a few possibilities. Modify as needed.
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    extract_cluster_element_names_and_values_automaticallyMOD.vi ‏23 KB

  • ASA 8.0 VPN cluster with WEBVPN and Certificates

    I'm looking for advice from anyone who has implemented or tested ASA 8.0 in a VPN cluster using WebVPN and the AnyConnect client. I have a standalone ASA configured with a public certificate for SSL as vpn.xxxx.org, which works fine.
    According to the config docs for 8.0, you can use a FQDN redirect for the cluster so that certificates match when a user is sent to another ASA.
    Has anyone done this? It looks like each box will need 2 certificates, the first being vpn.xxxx.org and the second being vpn1.xxxx.org or vpn2.xxxx.org depending on whether this is ASA1 or ASA2. I also need DNS forward and reverse entries, which is no problem.
    I'm assuming the client gets presented the appropriate certificate based on the http GET.
    Has anyone experienced any issues with this? Things to look out for migrating to a cluster? Any issues with replicating the configuration and certificate to a second ASA?
    Example: Assuming ASA1 is the current virtual cluster master and is also vpn1.xxxx.org. ASA 2 is vpn2.xxxx.org. A user browses to vpn.xxxx.org and terminates to ASA1, the current virtual master. ASA1 should present the vpn.xxxx.org certificate. ASA1 determines that it has the lowest load and redirects the user to vpn1.xxxx.org to terminate the WebVPN session. The user should now be presented a certificate that matches vpn1.xxxx.org. ASA2 should also have the certificate for vpn.xxxx.org in case it becomes the cluster master during a failure scenario.
    Thanks,
    Mark

    There is a bug associated with this issue: CSCsj38269. Apparently it is fixed in the interim release 8.0.2.11, but when I upgraded to 8.0.3 this morning the bug was still there.
    Here are the details:
    Symptom:
    ========
    ASA 8.0 load balancing cluster with WebVPN.
    When connecting using a web browser to the load balancing IP address or FQDN,
    the certificate sent to the browser is NOT the certificate from the trustpoint
    assigned for load balancing using the
    "ssl trust-point vpnlb-ip" command.
    Instead it uses the SSL trust-point certificate assigned to the interface.
    This will generate a certificate warning in the browser, as the URL entered
    in the browser does not match the CN (common name) in the certificate.
    Other than the warning, there is no functional impact if the end user
    accepts the warning and proceeds.
    Condition:
    =========
    webvpn with load balancing is used
    Workaround:
    ===========
    1) downgrade to latest 7.2.2 interim (7.2.2.8 or later)
    Warning: configs are not backward compatible.
    2) upgrade to 8.0.2 interim (8.0.2.11 or later)
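    For reference, the configuration the bug concerns is the per-box trust-point assignment; a hedged sketch (the trustpoint names VPN1-CERT and VPNLB-CERT are placeholders, and the syntax should be verified against the ASA 8.0 docs):
    ! certificate for this box's own FQDN, e.g. vpn1.xxxx.org
    ssl trust-point VPN1-CERT outside
    ! certificate for the cluster FQDN vpn.xxxx.org, presented on the load-balancing IP
    ssl trust-point VPNLB-CERT outside vpnlb-ip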

  • Cluster Name Resolution and failover

    We are running a cluster consisting of two managed servers and one admin server on Solaris using WebLogic 6.1 SP3. The application is composed of EJBs only. Each managed server has been assigned a DNS name (host names weblogic-cluster-ma & weblogic-cluster-mb). A cluster address (DNS name weblogic-cluster) has been set up to round-robin amongst the two IPs associated with the managed servers that are participating in the cluster.
    If I do an nslookup on the same cluster name successively, I get two different IP addresses.
    The behavior that I am observing is as follows:
    1. When all instances participating in the cluster are running, everything is fine. The clients are able to connect to the machine using the cluster name while doing the JNDI lookup.
    2. When one of the servers participating in the cluster is down and the cluster name (the round-robin DNS name) is used to access it, then depending on which machine is down, all calls either get through or none of them get through.
    I do not want to use the following cluster-aware syntax to access the cluster:
    t3://weblogic-cluster-ma,weblogic-cluster-mb:7001
    Instead I would like to use the cluster name t3://weblogic-cluster:7001 and have transparent failover.
    It would appear to me that if I have DNS return all the IPs associated with the cluster, then my problem would be solved. For example:
    nslookup weblogic-cluster
    216.33.240.47, 216.33.240.12
    Is this possible? How have others solved this problem?
    Thanks for your reply!

    Aaravali,
    What you describe should work. However, could you clarify statement #2, specifically "all calls either get through or none of them get through": do you mean that the client either works or it simply fails? Are you failing on creating the initial context?
    Regards,
    Simon
    Developer Relations Engineer
    BEA Support
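    For what it's worth, a minimal sketch of the client-side lookup being discussed, assuming the standard WebLogic JNDI client API of that era (requires weblogic.jar on the classpath; the JNDI name "SomeEJBHome" is a placeholder):
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    public class ClusterLookup {
        public static void main(String[] args) throws NamingException {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");
            // Cluster DNS name; failover here depends on DNS returning
            // all member IPs and the client retrying the initial context.
            env.put(Context.PROVIDER_URL, "t3://weblogic-cluster:7001");
            Context ctx = new InitialContext(env);
            Object home = ctx.lookup("SomeEJBHome"); // placeholder JNDI name
            System.out.println("Looked up: " + home);
            ctx.close();
        }
    }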
              

  • Is there a way of passing a mixed cluster of numbers and booleans to TestStand

    Hi,
    I have a LabVIEW VI that contains an output cluster with both numeric and boolean results and would like to pass this to TestStand. At the moment I have converted all my boolean results to 1/0 so that I can create a numeric array and quite easily pass this to TestStand (using a Multiple Numeric Limit Test).
    Is there a way to pass mixed results to TestStand and write in the limits (for example PASS, and GT 5V), or do I have to stick with what I have?
    Chris

    Which test step type to use depends on what you have to analyze - a boolean condition? String? Number(s)? I can't tell you because I don't know what's in your cluster. If you click on the plus sign next to the parameter name "output cluster" you will see all parameters and their types, which are passed from the VI to TestStand.
    You can either create a variable for the whole cluster, or you can assign all or just some values from within the cluster to variables.
    The name of the variable (Locals.xxxxxxx... or FileGlobals.xxxxx...) is what you type in the value field. You can also choose the variable from the expression browser by clicking on the f(x) button.
    Are you new to TestStand? Do you know how to work with variables in TS?
    Maybe the attached picture gives you an example; there I am assigning the values from the VI output "VoltageOutputArray" to the TS variable Locals.VoltageMeasurement.
    This variable is used again on the tab "Data Source" as the Data Source Expression.
    Regards,
    gedi
    Attachments:
    stepsettings.jpg ‏89 KB

  • How can I create a RAW pic file with a header (cluster of 8 elements) and a 2D array?

    I have created a cluster with 8 "I32" elements (the header). I also got the picture information into a 2D array (I converted information from a ROI into a 2D array ("Image pixels (I16)")).
    Now I want to create a RAW picture file (first the header information, followed by the picture information).
    I think I have to convert the header cluster into an array with "Cluster to Array". But even this is not possible (Info: "you cannot combine 2 different clusters"). Then I would have to combine the 2 arrays with "Insert into Array"??? But the header array would be a 1D array (if the cluster-to-array conversion worked) and the picture information is 2D.
    Can anyone help me please?

    I think this is actually pretty straightforward, barring any byte-order issues. Attached is an image that demonstrates what I think jotthape wants to do.
    The key is to let multiple Write File nodes do the work for you. There's no need to combine all the data into a single structure for the write, and avoiding that makes things easier in this case, because you don't have to worry about some of the data being I32 and the rest being I16.
    Give this a shot,
    John
    P.S. It does seem likely that you will run into byte-order problems if you want to read your custom RAW file in using some other piece of Windows software. You can check the forum and the NI site for info on byte order, then ask a new question if you're still having a problem.
    Attachments:
    writeRaw.png ‏8 KB

  • Access is denied messages in Win2012 R2 Failover Cluster validation report and CSV entering a paused state

    Been having some issues with nodes basically dropping out of the cluster configuration.
    Error showing was
    "Cluster Shared Volume 'Volume1' ('Data') has entered a paused state because of '(c000020c)'. All I/O will temporarily be queued until a path to the volume is reestablished."
    All nodes (PowerEdge 420) are connected to a Dell MD3200 shared SAS storage.
    Nodes point to virtual 2012 R2 DCs.
    Upon running validation with just two nodes, I get the same errors over and over again.
    Bemused!
    List Software Updates
    Description: List software updates that have been applied on each node.
    An error occurred while executing the test.
    An error occurred while getting information about the software updates installed on the nodes.
    One or more errors occurred.
    Creating an instance of the COM component with CLSID {4142DD5D-3472-4370-8641-DE7856431FB0} from the IClassFactory failed due to the following error: 80070005 Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)).
    and
    List Disks
    Description: List all disks visible to one or more nodes. If a subset of disks is specified for validation, list only disks in the subset.
    An error occurred while executing the test.
    Storage cannot be validated at this time. Node 'zhyperv2.KISLNET.LOCAL' could not be initialized for validation testing. Possible causes for this are that another validation test is being run from another management client, or a previous validation test was
    unexpectedly terminated. If a previous validation test was unexpectedly terminated, the best corrective action is to restart the node and try again.
    Access is denied
    The event viewer on one of the hosts shows
    Cluster node 'zhyperv2' lost communication with cluster node 'zhyperv1'.  Network communication was reestablished. This could be due to communication temporarily being blocked by a firewall or connection security policy update. If the problem persists
    and network communication are not reestablished, the cluster service on one or more nodes will stop.  If that happens, run the Validate a Configuration wizard to check your network configuration. Additionally, check for hardware or software errors related
    to the network adapters on this node, and check for failures in any other network components to which the node is connected such as hubs, switches, or bridges.
    The Cluster service is shutting down because quorum was lost. This could be due to the loss of network connectivity between some or all nodes in the cluster, or a failover of the witness disk.
    Run the Validate a Configuration wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapter. Also check for failures in any other network components to which the node is connected
    such as hubs, switches, or bridges.
    The only other warning is because the 4 NIC ports in each node server are teamed on one IP address split over two switches. I am not concerned about this and could, if required, split the pairs. I think this is a red herring????

    Hi,
    Such events happen because of the following reasons:
    1- Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks are not enabled on all network interfaces. Check this KB article: http://support.microsoft.com/kb/2008795. Please make sure these two protocols are enabled on all cluster networks.
    2- Network connectivity issue can cause this event as well. Please make sure the network cabling/Cards/Switches are correctly configured and working as expected
    3- Connectivity issues with the storage can also cause this event. Please make sure all the nodes are connected to storage. Check HBA/cabling connectivity to the SAN. Make sure that the SAN drivers are up to date.
    4- Antivirus may interrupt network communication and cause this failure. Please exclude CSV volumes from being scanned by AV: http://social.technet.microsoft.com/wiki/contents/articles/953.microsoft-anti-virus-exclusion-list.aspx
    5- Disable TCP Chimney related settings on all cluster nodes. http://support.microsoft.com/kb/951037
    6- Please check the Network Binding Order (http://social.technet.microsoft.com/Forums/windowsserver/en-US/2535c73a-a347-4152-be7a-ea7b24159520/hyperv-r2-csv-cluster-recommended-binding-order?forum=windowsserver2008r2highavailability)
    7- Firewall Rules For All Inbound and Outbound For Cluster and Hyper-V for all the Profiles
    8- Update NIC Driver/Firmware.
    9- Check Compatibility of the NIC with Windows Server 2012R2
    10- Set-NetAdapterRss - Resources and Tools for IT Professionals | TechNet : http://technet.microsoft.com/en-us/library/jj130863.aspx
    11- Check the Following Article http://social.technet.microsoft.com/Forums/windowsserver/en-US/e06fede9-931c-4dee-8379-4fd985e20f0a/hypervvmswitch-eventid-106
    12- General Updates to be applied on the nodes :
    Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 update rollup: November 2013 : http://support.microsoft.com/kb/2887595
    Windows 8.1 and Windows Server 2012 R2 General Availability Update Rollup :
    http://support.microsoft.com/kb/2883200
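    After working through the list, validation can be re-run from an elevated PowerShell prompt on one node (a minimal sketch, assuming the FailoverClusters module is installed; the node names are the ones from your post):
    Test-Cluster -Node zhyperv1, zhyperv2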
    Hope this helps.

  • Cache.cluster.multicast.ip and Installing Oracle Identity Analytics

    After following the configuration steps outlined below and deploying the WAR to GlassFish, I get an error starting RBACX that I think must be related to the cache.cluster.multicast.ip property settings below (the error log follows the configuration steps outlined next):
    7.Make the following changes if there are multiple instances of Oracle Identity Analytics, standalone or clustered, on the same subnet.
    a.Navigate to rbacx_staging/WEB-INF directory.
    b.In a text editor, open application-context.xml, find bean id commManager, and examine the constructor-arg value.
    c.Set the constructor-arg value with a unique instance name, e.g. value="SRM-Instance-1".
    d.In a text editor, open search-context.xml, find bean id searchConfiguration, and examine the constructor-arg value.
    •If the deployment is standalone, the constructor-arg defaults to a value of 0, which is specified as value="0".
    e.Navigate to rbacx_staging/WEB-INF/classes directory and do the following:
    i.In a text editor, open oscache.properties (located in the rbacx_staging/WEB-INF/classes directory), and find the cache.cluster.multicast.ip property.
    ii.Uncomment cache.cluster.multicast.ip by removing the # at the start of the line. Each Oracle Identity Analytics instance requires a unique cache.cluster.multicast.ip value.
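    As a sketch of step ii, the uncommented line in oscache.properties would look like this (the IP value here is taken from the log below; each instance on the subnet needs its own unique value):
    cache.cluster.multicast.ip=231.12.21.135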
    rbacx.log
    10:28:41,357 INFO [Config] OSCache: Getting properties from URL file:/C:/Program%20Files/glassfish-v2.1/domains/domain1/applications/j2ee-modules/rbacx/WEB-INF/classes/oscache.properties for the default configuration
    10:28:41,359 INFO [Config] OSCache: Properties read {cache.cluster.multicast.ip=231.12.21.135, cache.event.listeners=com.opensymphony.oscache.plugins.clustersupport.JavaGroupsBroadcastingListener,com.opensymphony.oscache.extra.CacheMapAccessEventListenerImpl}
    10:28:41,360 INFO [GeneralCacheAdministrator] Constructed GeneralCacheAdministrator()
    10:28:41,360 INFO [GeneralCacheAdministrator] Creating new cache
    10:28:41,377 INFO [AbstractBroadcastingListener] AbstractBroadcastingListener registered
    10:28:41,379 INFO [JavaGroupsBroadcastingListener] Starting a new JavaGroups broadcasting listener with properties=UDP(mcast_addr=231.12.21.135;mcast_port=45566;ip_ttl=32;mcast_send_buf_size=150000;mcast_recv_buf_size=80000):PING(timeout=2000;num_initial_members=3):MERGE2(min_interval=5000;max_interval=10000):FD_SOCK:VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK(gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):UNICAST(timeout=300,600,1200,2400):pbcast.STABLE(desired_avg_gossip=20000):FRAG(frag_size=8096;down_thread=false;up_thread=false):pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=true)
    10:28:41,654 WARN [Configurator] NAKACK property max_xmit_size was deprecated and is ignored
    10:28:41,674 WARN [Configurator] FRAG property down_thread was deprecated and is ignored
    10:28:41,674 WARN [Configurator] FRAG property up_thread was deprecated and is ignored
    10:28:41,688 WARN [Configurator] GMS property join_retry_timeout was deprecated and is ignored
    10:28:41,715 WARN [UDP] receive buffer of socket java.net.DatagramSocket@3734fc1a was set to 64KB, but the OS only allocated 64KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
    10:28:41,715 WARN [UDP] receive buffer of socket java.net.MulticastSocket@77932b46 was set to 80KB, but the OS only allocated 80KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
    10:28:41,723 WARN [UDP] failed to join /224.0.75.75:7500 on net5: java.net.SocketException: Unrecognized Windows Sockets error: 0: no Inet4Address associated with interface
    10:28:41,723 WARN [UDP] failed to join /224.0.75.75:7500 on eth4: java.net.SocketException: Unrecognized Windows Sockets error: 0: no Inet4Address associated with interface
    10:28:41,724 WARN [UDP] failed to join /224.0.75.75:7500 on net9: java.net.SocketException: Unrecognized Windows Sockets error: 0: no Inet4Address associated with interface
    10:28:41,724 WARN [UDP] failed to join /224.0.75.75:7500 on net10: java.net.SocketException: Unrecognized Windows Sockets error: 0: no Inet4Address associated with interface
    10:28:41,724 WARN [UDP] failed to join /224.0.75.75:7500 on net11: java.net.SocketException: Unrecognized Windows Sockets error: 0: no Inet4Address associated with interface
    10:28:43,768 INFO [JavaGroupsBroadcastingListener] JavaGroups clustering support started successfully
    10:29:14,732 INFO [ContextLifecycleListener] Sun Role Manager (build: 5.0.2.20100125_69_7083) Started
    10:29:14,953 WARN [SearchIndexStartupRunner] Indexes are not consistent.
    10:29:14,953 WARN [SearchIndexStartupRunner] We will rebuild indexes now.
    10:29:19,694 ERROR [ContextLoader] Context initialization failed
    org.springframework.jdbc.BadSqlGrammarException: SqlMapClient operation; bad SQL grammar []; nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:
    --- The error occurred in com/vaau/rbacx/dao/ibatis/maps/BaseAttributeWrapper.xml.
    --- The error occurred while applying a parameter map.
    --- Check the BaseAttributeWrapper.findAttributeValuesMaxIdWithNullHash-InlineParameterMap.
    --- Check the statement (query failed).
    --- Cause: java.sql.SQLSyntaxErrorException: ORA-00904: "HASH_VALUE": invalid identifier
         at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:220)
         at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
         at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:212)
         at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForObject(SqlMapClientTemplate.java:271)
         at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForObject(SqlMapClientTemplate.java:265)
         at com.vaau.rbacx.dao.ibatis.SqlMapBaseAttributeWrapperDao.findAttributeValuesMaxIdWithNullHash(SqlMapBaseAttributeWrapperDao.java:280)
         at com.vaau.rbacx.core.metadata.manager.MetadataManagerImpl.findAttributeValuesMaxIdWithNullHash(MetadataManagerImpl.java:115)
         at com.vaau.rbacx.system.bootstrap.AttributeValueHashCheckStarupRunner.runStartupItems(AttributeValueHashCheckStarupRunner.java:36)
         at com.vaau.rbacx.system.web.bootstrap.SrmContextLoaderListener.onApplicationEvent(SrmContextLoaderListener.java:61)
         at org.springframework.context.event.SimpleApplicationEventMulticaster$1.run(SimpleApplicationEventMulticaster.java:78)
         at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:49)
         at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:76)
         at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:274)
         at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:736)
         at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:383)
         at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255)
         at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199)
         at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45)
         at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4632)
         at org.apache.catalina.core.StandardContext.start(StandardContext.java:5312)
         at com.sun.enterprise.web.WebModule.start(WebModule.java:353)
         at com.sun.enterprise.web.LifecycleStarter.doRun(LifecycleStarter.java:58)
         at com.sun.appserv.management.util.misc.RunnableBase.runSync(RunnableBase.java:304)
         at com.sun.appserv.management.util.misc.RunnableBase.run(RunnableBase.java:341)
         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: com.ibatis.common.jdbc.exception.NestedSQLException:
    --- The error occurred in com/vaau/rbacx/dao/ibatis/maps/BaseAttributeWrapper.xml.
    --- The error occurred while applying a parameter map.
    --- Check the BaseAttributeWrapper.findAttributeValuesMaxIdWithNullHash-InlineParameterMap.
    --- Check the statement (query failed).
    --- Cause: java.sql.SQLSyntaxErrorException: ORA-00904: "HASH_VALUE": invalid identifier
         at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:201)
         at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryForObject(MappedStatement.java:120)
         at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:518)
         at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForObject(SqlMapExecutorDelegate.java:493)
         at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForObject(SqlMapSessionImpl.java:106)
         at org.springframework.orm.ibatis.SqlMapClientTemplate$1.doInSqlMapClient(SqlMapClientTemplate.java:273)
         at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:209)
         ... 27 more
    Caused by: java.sql.SQLSyntaxErrorException: ORA-00904: "HASH_VALUE": invalid identifier
         at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:91)
         at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
         at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
         at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:194)
         at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:791)
         at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:866)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
         at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3488)
         at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1374)
         at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.execute(NewProxyPreparedStatement.java:989)
         at com.ibatis.sqlmap.engine.execution.SqlExecutor.executeQuery(SqlExecutor.java:185)
         at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.sqlExecuteQuery(MappedStatement.java:221)
         at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:189)
         ... 33 more
    Any thoughts/suggestions appreciated.
    Gary Sharpe
    UC Davis

    Did you import all the relevant SQL scripts for your database? You have to import all the migrate/upgrade scripts as well. Once you have imported all the required SQL scripts, it will start properly. You can ignore the cluster warnings.

  • Oracle Cluster, Oracle Cluster with RAC and Oracle 10g

    Is there a difference between Oracle Cluster and Oracle Cluster with RAC? Please explain. Does existing database code run unmodified in a Cluster or Cluster-with-RAC environment? What needs to be modified to make existing SQL code RAC-aware? How does one achieve fully automatic failover and resubmission of queries from a failed instance to a running instance?
    In a 10g environment, do we need to consider licensing of RAC as a separate product? What additional features does one get in 10g that are not in Cluster + RAC?
    Your comments and pointers to comparison studies and diagrams will be very helpful.

    An Oracle cluster, like Failsafe before it, or a Veritas Cluster or another vendor's cluster, is meant for HA (high availability) purposes: two or more nodes can see a shared disk, with one active node. Whenever this active node fails, the other machine will know through the heartbeat and will take the database over from there.
    Oracle RAC is for both HA and load balancing. In Oracle RAC, two or more nodes access the database at the same time, so it spreads load across all these nodes.
    I believe Oracle 10g RAC still needs a separate license for it. But you need to call Oracle or check the product documentation to verify it.
    Oracle 10g, besides the improvements in RAC, has its main improvement in the built-in management of the database itself. It can monitor and self-tune itself to a much deeper level than before and give the DBA much more information to determine the cause of a problem as well. Plus improvements to lots of utilities as well, like RMAN, Data Pump, etc... I don't want to get into too much detail on this; you can check their 10g new features for a more detailed view.
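    On the "all automatic" failover question: RAC's Transparent Application Failover (TAF) resubmits in-flight SELECTs to a surviving instance. A hedged tnsnames.ora sketch (the alias, VIP host names, and service name are placeholders; uncommitted DML is not replayed automatically):
    TESTRAC =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
          (LOAD_BALANCE = yes)
        )
        (CONNECT_DATA =
          (SERVICE_NAME = testdb)
          (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC))
        )
      )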
    Hope this helps. :)
