Campus cluster with storage replication

Hi all,
we are planning to implement a campus cluster with storage replication over a distance of 4 km, using the Remote Mirror feature of the Sun StorageTek 6140.
The primary storage (the one where the quorum resides) and the replicated secondary storage will be in separate sites, interconnected with dedicated single-mode fiber.
The nodes of the cluster will use the primary storage, and the data from the primary will be replicated to the secondary storage using Remote Mirror.
Now, in case the primary storage fails completely, how can the cluster continue operation with the secondary storage? What is the procedure? What does the initial configuration look like?
Regards..
S

Hi,
a high-level overview with a list of restrictions can be found here:
http://docs.sun.com/app/docs/doc/819-2971/6n57mi28m?q=TrueCopy&a=view
More details on how to set this up can be found at:
http://docs.sun.com/app/docs/doc/819-2971/6n57mi28r?a=view
The basic setup would be to have 2 nodes, 2 storage boxes, and TrueCopy between the 2 boxes, but no cross-cabling. The HAStoragePlus resource, being part of a service resource group, would use a device that had been "cldevice replicate"d by the administrator, so that the "same" device can be used on both nodes.
I am not sure how a failover is triggered if the primary storage box fails. But due to the "replication" mentioned above, SC knows how to reconfigure the replication in the case of a failover.
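A minimal sketch of what that setup could look like (resource, group and device names are hypothetical, and the exact cldevice replicate syntax should be taken from the docs linked above):
# cldevice replicate ...   (combine the DID instances of the two storage boxes; see the docs)
# clrt register SUNW.HAStoragePlus
# clrg create -n nodeA,nodeB tc-rg
# clrs create -g tc-rg -t SUNW.HAStoragePlus \
-p GlobalDevicePaths=/dev/global/dsk/d7 tc-hasp-rs
# clrg online -emM tc-rg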
Unfortunately, due to lack of HDS storage in my own lab, I was not able to test this setup; so this is all theory.
Regards
Hartmut
PS: Keep in mind that the only replication technology integrated into SC today is HDS TrueCopy. If you're thinking of doing manual failovers anyway, you could have a look at Sun Cluster Geographic Edition, which is more of a disaster-recovery-style configuration that combines 2 or more clusters and is able to fail over resource groups including their replication; this product already supports more replication technologies and will support even more in the future. Have a look at http://docsview.sfbay.sun.com/app/docs/coll/1191.3

Similar Messages

  • Is Sun Storage 7000 Storage replication adapter compatible with 2540 series

    We are evaluating VMWare Site Recovery Manager 4.0. Is Sun Storage 7000 Storage replication adapter compatible with 2540 Sun Storage series?

I would say the short answer would be no. The 2540 is an LSI-based device; the SRM adapter for it is written by LSI. The 7000
series array is based on OpenSolaris and ZFS, quite a different beast.

  • Do CVU, SRVCTL, CRSCTL work with Veritas Cluster and storage

Do these utilities work if I am using Veritas Cluster and storage?

Yes, they would. The only doubt I have is about CVU detecting the clustered storage. If it fails to detect it for some reason (I can't think of why it would fail... but anyway), use the "-s <storageID_list>" parameter to see if it detects it:
cluvfy stage -post hwos -n <node_list> -s <storageID_list>
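For example (node and disk names hypothetical):
cluvfy stage -post hwos -n node1,node2 -s /dev/vx/rdsk/datadg/vol01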
    Thanks
    Chandra

  • Campus cluster for services inside LDOMs

    Hi,
Is it possible to set up a campus cluster using Hitachi TrueCopy replication between two LDoms running on a pair of T4 servers?
I'm thinking about 4-5 LDoms on each T4, with the Solaris Cluster framework installed in each LDom, connected in a campus cluster configuration and running NFS on ZFS-formatted LUNs. To be able to fail over, I need HORCM to be able to access the command device on the storage, which should be a raw LUN.
What actually concerns me is whether it is possible to pass the raw LUNs necessary for HORCM operations to the LDoms. We do not have a T4 server with attached storage yet, and I cannot check whether ldm add-io can operate on the LUN level rather than the HBA level; I have yet to verify this.
Solaris Cluster in the control domain with failover of LDoms does not provide what I'm trying to achieve - HA for the service running inside the LDom, whether it's NFS or Oracle.
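To make it concrete, what I'd hope works is exporting the raw command-device LUN through the virtual disk service rather than ldm add-io, roughly like this (device path and names are hypothetical, untested on my side):
# ldm add-vdsdev /dev/rdsk/c5t600...d0s2 horcm-cmd@primary-vds0
# ldm add-vdisk horcm-cmd horcm-cmd@primary-vds0 ldom1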
    Thanks !

    Storm,
    OK, I'm not sure I understood this statement:
    "We can have same LUNs AA and BB zoned to node B and they also will be ReadWrite."
Suppose your zpool is called mypool. Further, suppose that your LUN AA appears as /dev/did/dsk/d7 and LUN BB as /dev/did/dsk/d8. You would create the zpool as follows:
    # zpool create mypool \
    mirror /dev/did/dsk/d7 /dev/did/dsk/d8
    You would then create a resource group and resource to control this.
    # clrg create -n nodeA,nodeB my-storage-rg
    # clrt register SUNW.HAStoragePlus
    # clrs create -g my-storage-rg -t SUNW.HAStoragePlus \
    -p zpools=mypool my-storage-hasp-rs
    # clrg online -emM my-storage-rg
    That would bring your zpool online on nodeA. The zpool would be read/write accessible from nodeA only. If you then switched the resource group to nodeB using:
    # clrg switch -n nodeB my-storage-rg
    Then you could read/write to the zpool from nodeB.
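To verify where the pool is currently imported, you could run, for example:
# clrg status my-storage-rg
# zpool list mypool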
There is no way, either with mirrored zpools and HAStoragePlus or with TrueCopy/UR & HORCM, that you can read/write to a zpool from both nodes concurrently.
If someone forcibly imported the zpool onto the other node, then you are very likely to suffer permanent zpool corruption. But then, if someone dd'ed over your root disk, that would be fatal too ;-)
With regard to replication via TC/UR and ZFS, the important feature for preserving data consistency is the UR journal. As far as I remember, journals are only applicable to Universal Replicator when set up for asynchronous replication; I don't think UR supports journals with synchronous replication. However, since this isn't an Oracle product, I would strongly recommend you check my statements with Hitachi - things may have changed since I last spoke to them.
    Hope that helps,
    Tim
    ---

  • 3-Node Hyper-V 2012 R2 Failover Clustering with Storage Spaces on one of the Hyper-V hosts

    Hi,
We have 3x Dell R720s, each with 5x 600 GB 3.5" 15K SAS drives and 128 GB RAM. I was wondering if I could set up a Hyper-V 2012 R2 failover cluster with these 3, with the shared storage for the CSV being provided by one of the Hyper-V hosts with Storage Spaces installed (is Storage Spaces supported on Hyper-V?). Or I could use a 2-node failover cluster and the third one as a standalone Hyper-V or Server 2012 R2 with Hyper-V and Storage Spaces.
Each server comes with quad-port 1G and dual-port 10G NICs, so I can dedicate the 10G NICs to iSCSI.
We don't have a SAN or a 10G switch, so it would be a crossover-cable connection between the servers.
Most of the VMs would be non-HA. Exchange 2010, SharePoint 2010 and SQL Server 2008 R2 would be the only VMs running as HA VMs. The CSV for the Hyper-V failover cluster would be provided by Storage Spaces.

I thought I was trying to do just that with 8x 600 GB in RAID-10 using the hardware RAID controller (on the 3rd server) and creating CSVs out of that space, so as to provide better storage performance for the HA VMs.
1. Storage server: 8x 600 GB RAID-10 (for CSVs to house all HA VMs running on the other 2 servers). It may also run some local VMs that have very little disk I/O.
2. Hyper-V-1: will act as the primary Hyper-V host for the 2x Exchange and database server HA VMs (the VHDXs would be stored on the storage server's CSVs on top of the 8x 600 GB RAID-10). May also run some non-HA VMs using the local 2x 600 GB in RAID-1.
3. Hyper-V-2: will act as a Hyper-V host when the above HA VMs fail over to it (when Hyper-V-1 is down for any reason). May also run some non-HA VMs using the local 2x 600 GB in RAID-1.
The single point of failure for the HA VMs (non-HA VMs are non-HA, so it's OK if they are down for some time) is the storage server. The Exchange servers here are DAG peers of the Exchange servers at the head office, so in case the storage server mainboard goes down (disk failure is mitigated using RAID; other components such as RAM and the mainboard may still go, but their failure rate is relatively low), the local Exchange servers would be down, but Exchange clients would still be able to do their email-related tasks using the HO Exchange servers.
Also, they are under 4-hour mission-critical support, including entire server replacement within the 4-hour period.
If you're OK with your shared storage being a single point of failure, then sure, you can proceed the way you've listed. However, you'll still route all VM-related I/O over Ethernet, which is obviously slower than running VMs from DAS (with or without a virtual SAN LUN-to-LUN replication layer), as DAS has higher bandwidth and lower latency. Also, with your scenario you exclude one host from your hypervisor cluster, so running VMs on a pair of hosts instead of three would give you much worse performance and resilience: with 1 of 3 physical hosts lost, the cluster would still be operable, while with 1 of 2 lost, all VMs booted on a single node could give you inadequate performance. So make sure your hosts are generously overprovisioned, as every single node should be able to handle ALL of the workload in case of disaster. Good luck and happy clustering :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.

  • SQL Cluster with Mirroring

Hello all. Consider the following:
We need to build a two-node failover cluster in the following manner:
OS:  Windows 2012
SQL: SQL 2012
Node_A                                     Node_B
=====                                      =====
SAN_A_Storage                              SAN_B_Storage (both servers with their own storage; FSW to be used for quorum)
Instance_A                                 Instance_B
Each server would have its own instance: Node_A would have Instance_A and Node_B would have Instance_B. Mirroring would be configured vice versa, that is, the mirror copy of Node_A would be on Node_B and Node_B's mirror copy would be on Node_A.
In other words, both instances would be active at the same time, with the mirror copy on the other server.
Is this doable?

    Thanks for the reply.
Actually, I have to implement the solution based on a design that has been handed over to me. But since it confused me, I turned here for guidance. So, design-wise, I cannot change anything.
I fully understand that HA can be achieved directly through a two-node cluster. I also get your point that in common mirroring scenarios, either a cluster is mirrored to a DR-site cluster or to a single server; but in my scenario, the cluster is the same at the HA and DR sites.
Following are some clarifications with regard to the environment:
The design involves two sites, A and B, where both sites have their own storage (no storage replication). The FSW will be present at a third site for quorum. The cluster would be a stretched cluster (fiber connectivity between sites). Each site's node would run that site's instances, with their mirror copies placed on the other site's node.
Mirroring was chosen due to a supportability limitation; otherwise AOAG would be in place for such scenarios. Since I have not encountered this situation before, I am just verifying whether this is workable, as I have a few more questions after this, but first wanted to make sure that it is supported in SQL 2012.
Thanks.

  • Stretch (campus) cluster and ASM bug 6656309

    Hi all
we have applied the 6656309 patch in our Test environment and we are thinking
of applying it in Production soon as well.
    If anyone:
    - has a stretch (campus) 10.2.3 RAC cluster ,
    - is using ASM with storages in (at least) two different buildings,
    - is aware of bug 6656309 (and related 6360485, 6747864 for example),
    - plans to install the patch (it has been released for a few platforms),
    - wants to share his thoughts here,
    he is really welcome.
    Thanks
    Oscar Armanini
Please note that the patch applies to the db executable(s), not the ASM ones, but I prefer to post this in the ASM forum because the end result of the bug is that ASM does not declare the disks missing when one storage system gets lost.

10gR2 has some limitations/issues in the case of an extended cluster (read about the re-silvering process in 10g). We put an extended cluster on Solaris, but we didn't use failgroups for the same reason.
We used Sun Cluster to mirror the disks at the OS level.
We implemented an 11g extended cluster on Windows and it works like a charm. Of course, it has its own limitations (placing the 3rd voting disk is an issue on Windows).
Anyway, for an extended cluster, 11g is the best option so far, if possible.

  • Persistent Store Problems for MySQL Enhanced Cluster with OpenMQ 4.4

    I am trying to implement an enhanced cluster with failover. I have edited the config files for each broker instance for a persistent store. I have appended the following to each of the config.properties files:
    imq.brokerid=myclusterinstanceINSTANCE1 # I substitute INSTANCE2 for INSTANCE1 for broker #2
    imq.persist.store=jdbc
    imq.persist.jdbc.dbVendor=mysql
    imq.persist.jdbc.mysql.property.url=jdbc:mysql://xxx.xxx.xxx.xx:3306/test
    imq.persist.jdbc.mysql.user=user1
    imq.persist.jdbc.mysql.needpassword=true
    imq.persist.jdbc.mysql.password=mypass
    imq.cluster.ha=true
    imq.cluster.clusterid=mycluster
    imq.cluster.brokerlist=xxx.xxx.xxx.x:37676,yyy.yyy.yyy.y:37676
I then create the persistent store with "imqdbmgr create tbl". When I view the data in the tables it creates, I have one row. Under STORE_VERSION, I have 410. Under LOCK_ID, it has NULL. When I go to start the brokers with imqbrokerd, I get the following error:
    ERROR [B3198]: Error initializing cluster manager:
    com.sun.messaging.jmq.jmsserver.util.BrokerException: [B4239]: Failed to load persistent store version from database table MQVER41Cmycluster
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.VersionDAOImpl.getStoreVersion(VersionDAOImpl.java:310)
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.DBTool.updateStoreVersion410IfNecessary(DBTool.java:350)
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.JDBCStore.checkStore(JDBCStore.java:3599)
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.JDBCStore.<init>(JDBCStore.java:127)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at java.lang.Class.newInstance0(Class.java:355)
    at java.lang.Class.newInstance(Class.java:308)
    at com.sun.messaging.jmq.jmsserver.persist.StoreManager.getStore(StoreManager.java:157)
    at com.sun.messaging.jmq.jmsserver.Globals.getStore(Globals.java:967)
    at com.sun.messaging.jmq.jmsserver.cluster.ha.HAClusterManagerImpl.initialize(HAClusterManagerImpl.java:181)
    at com.sun.messaging.jmq.jmsserver.Globals.initClusterManager(Globals.java:903)
    at com.sun.messaging.jmq.jmsserver.Broker._start(Broker.java:777)
    at com.sun.messaging.jmq.jmsserver.Broker.start(Broker.java:410)
    at com.sun.messaging.jmq.jmsserver.Broker.main(Broker.java:1971)
    Caused by: java.lang.NullPointerException
    at com.mysql.jdbc.ResultSetImpl.findColumn(ResultSetImpl.java:1103)
    at com.mysql.jdbc.ResultSetImpl.getInt(ResultSetImpl.java:2777)
    at com.sun.messaging.jmq.jmsserver.persist.jdbc.VersionDAOImpl.getStoreVersion(VersionDAOImpl.java:298)
    ... 16 more
I believe this error is attributable to the NULL value under LOCK_ID. I think the value under LOCK_ID should be the name of the broker from the config file (even though I specified the broker IDs in the config files). Any ideas?? THANKS!

    Just some pointers -- maybe this will be of use:
If you haven't already read it, please take a look at the [MySQL setup guide|https://mq.dev.java.net/OpenMQ_MySQLCluster_Setup_Guide.html].
We recommend using the NDB data store of MySQL Cluster, though this isn't an absolute requirement. Due to some issues we have found with earlier versions, we recommend using MySQL Cluster 7.0.9 or better (the current version is 7.0.16, or 7.1.5). Either of these contains Connector/J.
I'd also recommend using the latest version - MQ 4.4 Update 2 (just in case you happen to have an older copy). There were many minor improvements in the integration with MySQL from the original 4.4 release to Update 2. It is linked from the MQ download page: [https://mq.dev.java.net/downloads.html]
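One more debugging aid, in case it is useful (broker instance name hypothetical; please check the imqbrokerd and imqdbmgr documentation for the exact options): properties can also be overridden per broker on the command line, and the JDBC store tables can be dropped and recreated from scratch:
imqbrokerd -name instance1 -Dimq.persist.store=jdbc -tty
imqdbmgr recreate tbl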

  • Oracle RAC + Clusterware and another Cluster with Clusterware for SAP

    Hi,
I have some questions about implementing Oracle RAC and Clusterware with SAP.
For example, an architecture with 4 servers (2 real and 2 virtual).
I would like to know if I can do this:
2 servers for the first cluster.
The first cluster is with Clusterware and Oracle RAC.
This is for the whole SAP Oracle database environment.
I think there is no problem here.
Now, with the 2 other servers, I would like to make another cluster (also with Clusterware) for the SAP Central Services (SCS) and the Enqueue Replication Server (ERS),
because the whole architecture is for only one SAP environment with separate services.
1 for the database (cluster 1)
1 for Central Services (cluster 2, virtual machines)
1 for the Dialog Instance (no cluster)
To be clear, the second cluster is to provide HA for the SAP central services (SCS and ERS).
My 2 questions are:
Is this a good way to do it, or is there anything wrong with it?
Do I have to install another Clusterware for these 2 servers, or can I do it with the existing Clusterware + Oracle RAC?
Thank you very much for your help.
Edited by: user12395221 on 29 Dec 2009 15:36

    Hi Givre,
have you checked Providing High Availability for SAP Resources (http://www.oracle.com/technology/products/database/clusterware/pdf/sap-availability-on-rac-twp.pdf), available on otn.oracle.com/clusterware? Not being an SAP expert myself, I still think this paper describes - at least partially - the configuration that you are trying to set up.
    Just an idea. Thanks,
    Markus

  • Guest two node Failover Cluster with SAS HBA and Windows 2012 R1

Hi all, I have two brand-new IBM x3560 servers with IBM V3700 storage. The servers are connected to the storage through four SAS HBA adapters (two HBAs on each server). I want to create a two-node guest file server failover cluster. I can present the LUNs to the guest machines, but when I'm running the cluster creation wizard it can't see any disks. I can see the disks in the Disk Management console. Is there any way to achieve this (the cluster creation) using my SAS-HBA-presented disks, or do I have to use iSCSI to present the disks to my cluster?
Thank you in advance, George

1) Update to R2 and use shared VHDX, which is a better way to go. See:
Shared VHDX
http://blogs.technet.com/b/storageserver/archive/2013/11/25/shared-vhdx-files-my-favorite-new-feature-in-windows-server-2012-r2.aspx
Clustering Options
http://blogs.technet.com/b/josebda/archive/2013/07/31/windows-server-2012-r2-storage-step-by-step-with-storage-spaces-smb-scale-out-and-shared-vhdx-virtual.aspx
2) If you want to stick with non-R2 (which is a bad idea, for tons of reasons), you can spawn an iSCSI target on top of your storage, make it clustered, and have it provide LUNs to your guest VMs. See:
iSCSI Target in Failover
http://technet.microsoft.com/en-us/library/gg232632(v=ws.10).aspx
iSCSI Target Failover Step-by-Step
http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
3) Use third-party software providing clustered (active-active) storage out of the box.
I would strongly recommend upgrading to R2 and using shared VHDX.
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Issue to setup local Coherence cluster with WKA (well-known-address)

Hello - I have started a local Coherence cluster using WKA with a single node, but when I start CacheFactory (coherence.cmd) with the same configuration, it throws the following error message.
Any help is appreciated.
JVM startup argument:
    -Dtangosol.coherence.override=cluster.xml
    cluster.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <coherence xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config http://xmlns.oracle.com/coherence/coherence-operational-config/1.1/coherence-operational-config.xsd">
    <cluster-config>
      <unicast-listener>
       <well-known-addresses>
        <socket-address id="1">
         <address>171.193.103.25</address>
         <port>8088</port>
        </socket-address>
       </well-known-addresses>
  </unicast-listener>
    </cluster-config>
    <logging-config>
      <destination>stdout</destination>
      <severity-level>9</severity-level>
    </logging-config>
    </coherence>
    Cluster startup Message
    WellKnownAddressList(Size=1,
      WKA{Address=171.193.103.25, Port=8088}
    MasterMemberSet(
      ThisMember=Member(Id=1, Timestamp=2013-10-24 11:07:18.603, Address=171.193.103.25:8088, MachineId=9041, Location=site:,machine:FD4C9EF534D5D,process:16704, Role=CoherenceServer)
      OldestMember=Member(Id=1, Timestamp=2013-10-24 11:07:18.603, Address=171.193.103.25:8088, MachineId=9041, Location=site:,machine:FD4C9EF534D5D,process:16704, Role=CoherenceServer)
      ActualMemberSet=MemberSet(Size=1
        Member(Id=1, Timestamp=2013-10-24 11:07:18.603, Address=171.193.103.25:8088, MachineId=9041, Location=site:,machine:FD4C9EF534D5D,process:16704, Role=CoherenceServer)
      MemberId|ServiceVersion|ServiceJoined|MemberState
        1|3.7.1|2013-10-24 11:07:48.843|JOINED
      RecycleMillis=1200000
      RecycleSet=MemberSet(Size=0
    TcpRing{Connections=[]}
    IpMonitor{AddressListSize=0}
    2013-10-24 11:07:48.869/31.794 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2013-10-24 11:07:49.058/31.983 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache, member=1): Service DistributedCache joined the cluster with senior service member 1
    2013-10-24 11:07:49.077/32.002 Oracle Coherence GE 3.7.1.0 <D6> (thread=DistributedCache, member=1): Service DistributedCache: sending PartitionConfig ConfigSync to all
    2013-10-24 11:07:49.121/32.046 Oracle Coherence GE 3.7.1.0 <D5> (thread=ReplicatedCache, member=1): Service ReplicatedCache joined the cluster with senior service member 1
    2013-10-24 11:07:49.128/32.053 Oracle Coherence GE 3.7.1.0 <D5> (thread=OptimisticCache, member=1): Service OptimisticCache joined the cluster with senior service member 1
    2013-10-24 11:07:49.131/32.056 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:InvocationService, member=1): Service InvocationService joined the cluster with senior service member 1
    2013-10-24 11:07:49.132/32.057 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=1):
    Services
      ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.7.1, OldestMemberId=1}
      InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=1}
      PartitionedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
      ReplicatedCache{Name=ReplicatedCache, State=(SERVICE_STARTED), Id=3, Version=3.0, OldestMemberId=1}
      Optimistic{Name=OptimisticCache, State=(SERVICE_STARTED), Id=4, Version=3.0, OldestMemberId=1}
      InvocationService{Name=InvocationService, State=(SERVICE_STARTED), Id=5, Version=3.1, OldestMemberId=1}
    Started DefaultCacheServer...
    Error Message from CacheFactory
    C:\Users\Zk5rjg8>C:\coherence37\bin\coherence.cmd
    ** Starting storage disabled console **
    java version "1.6.0_51"
    Java(TM) SE Runtime Environment (build 1.6.0_51-b11)
    Java HotSpot(TM) 64-Bit Server VM (build 20.51-b01, mixed mode)
    2013-10-24 11:13:22.851/0.392 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/C:/coherence37/lib/coherence.jar!/tangosol-coherence.xml"
    2013-10-24 11:13:22.920/0.462 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "file:/C:/coherence37/cluster.xml"
    2013-10-24 11:13:22.924/0.465 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
    2013-10-24 11:13:22.924/0.465 Oracle Coherence 3.7.1.0 <D6> (thread=main, member=n/a): Loaded edition data from "jar:file:/C:/coherence37/lib/coherence.jar!/coherence-grid.xml"
    Oracle Coherence Version 3.7.1.0 Build 27797
    Grid Edition: Development mode
    Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
    2013-10-24 11:13:23.722/1.263 Oracle Coherence GE 3.7.1.0 <D4> (thread=main, member=n/a): TCMP bound to /171.193.103.25:8090 using SystemSocketProvider
    2013-10-24 11:13:54.001/31.542 Oracle Coherence GE 3.7.1.0 <Warning> (thread=Cluster, member=n/a): This Member(Id=0, Timestamp=2013-10-24 11:13:23.762, Address=171.193.103.25:8090, MachineId=9041, Location=site:,machine:FD4C9EF534D5D,process:17192, Role=CoherenceConsole) has been attempting to joi
    2013-10-24 11:13:54.001/31.542 Oracle Coherence GE 3.7.1.0 <Warning> (thread=Cluster, member=n/a): Delaying formation of a new cluster; waiting for well-known nodes to respond
    2013-10-24 11:14:24.402/61.943 Oracle Coherence GE 3.7.1.0 <Warning> (thread=Cluster, member=n/a): Delaying formation of a new cluster; waiting for well-known nodes to respond
    2013-10-24 11:14:54.805/92.346 Oracle Coherence GE 3.7.1.0 <Warning> (thread=Cluster, member=n/a): Delaying formation of a new cluster; waiting for well-known nodes to respond
    2013-10-24 11:15:25.207/122.748 Oracle Coherence GE 3.7.1.0 <Warning> (thread=Cluster, member=n/a): Delaying formation of a new cluster; waiting for well-known nodes to respond
    2013-10-24 11:15:55.610/153.151 Oracle Coherence GE 3.7.1.0 <Warning> (thread=Cluster, member=n/a): Delaying formation of a new cluster; waiting for well-known nodes to respond
    2013-10-24 11:16:26.012/183.553 Oracle Coherence GE 3.7.1.0 <Warning> (thread=Cluster, member=n/a): Delaying formation of a new cluster; waiting for well-known nodes to respond
    2013-10-24 11:16:56.414/213.955 Oracle Coherence GE 3.7.1.0 <Warning> (thread=Cluster, member=n/a): Delaying formation of a new cluster; waiting for well-known nodes to respond
    2013-10-24 11:17:26.817/244.358 Oracle Coherence GE 3.7.1.0 <Warning> (thread=Cluster, member=n/a): Delaying formation of a new cluster; waiting for well-known nodes to respond
    2013-10-24 11:17:57.219/274.760 Oracle Coherence GE 3.7.1.0 <Warning> (thread=Cluster, member=n/a): Delaying formation of a new cluster; waiting for well-known nodes to respond
    2013-10-24 11:17:58.271/275.812 Oracle Coherence GE 3.7.1.0 <Error> (thread=Cluster, member=n/a): Detected soft timeout) of {WrapperGuardable Guard{Daemon=IpMonitor} Service=ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_ANNOUNCE), Id=0, Version=3.7.1}}
    2013-10-24 11:17:58.273/275.814 Oracle Coherence GE 3.7.1.0 <Error> (thread=Recovery Thread, member=n/a): Full Thread Dump
    Thread[PacketListener1,8,Cluster]
            java.net.PlainDatagramSocketImpl.receive0(Native Method)
            java.net.PlainDatagramSocketImpl.receive(Unknown Source)
            java.net.DatagramSocket.receive(Unknown Source)
            com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:22)
            com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:1)
            com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:20)
            com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
            java.lang.Thread.run(Unknown Source)
    Thread[PacketReceiver,7,Cluster]
            java.lang.Object.wait(Native Method)
            com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
            com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketReceiver.onWait(PacketReceiver.CDB:2)
            com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
            java.lang.Thread.run(Unknown Source)
    Thread[Attach Listener,5,system]
    Thread[PacketPublisher,6,Cluster]
            java.lang.Object.wait(Native Method)
            com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
            com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher.onWait(PacketPublisher.CDB:2)
            com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
            java.lang.Thread.run(Unknown Source)
    Thread[Cluster|STATE_ANNOUNCE|Member(Id=0, Timestamp=2013-10-24 11:13:23.762, Address=171.193.103.25:8090, MachineId=9041, Location=site:,machine:FD4C9EF534D5D,process:17192, Role=CoherenceConsole),5,Cluster]
            sun.nio.ch.WindowsSelectorImpl$SubSelector.poll0(Native Method)
            sun.nio.ch.WindowsSelectorImpl$SubSelector.poll(Unknown Source)
            sun.nio.ch.WindowsSelectorImpl$SubSelector.access$400(Unknown Source)
            sun.nio.ch.WindowsSelectorImpl.doSelect(Unknown Source)
            sun.nio.ch.SelectorImpl.lockAndDoSelect(Unknown Source)
            sun.nio.ch.SelectorImpl.select(Unknown Source)
            com.tangosol.coherence.component.net.TcpRing.select(TcpRing.CDB:11)
            com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.onWait(ClusterService.CDB:6)
            com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
            java.lang.Thread.run(Unknown Source)
    Thread[Reference Handler,10,system]
            java.lang.Object.wait(Native Method)
            java.lang.Object.wait(Object.java:485)
            java.lang.ref.Reference$ReferenceHandler.run(Unknown Source)
    Thread[Finalizer,8,system]
            java.lang.Object.wait(Native Method)
            java.lang.ref.ReferenceQueue.remove(Unknown Source)
            java.lang.ref.ReferenceQueue.remove(Unknown Source)
            java.lang.ref.Finalizer$FinalizerThread.run(Unknown Source)
    Thread[Signal Dispatcher,9,system]
    Thread[PacketSpeaker,8,Cluster]
            java.lang.Object.wait(Native Method)
            com.tangosol.coherence.component.util.queue.ConcurrentQueue.waitForEntry(ConcurrentQueue.CDB:16)
            com.tangosol.coherence.component.util.queue.ConcurrentQueue.remove(ConcurrentQueue.CDB:7)
            com.tangosol.coherence.component.util.Queue.remove(Queue.CDB:1)
            com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketSpeaker.onNotify(PacketSpeaker.CDB:21)
            com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
            java.lang.Thread.run(Unknown Source)
    Thread[Logger@1457155060 3.7.1.0,3,main]
            java.lang.Object.wait(Native Method)
            com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
            com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
            java.lang.Thread.run(Unknown Source)
    Thread[PacketListener1P,8,Cluster]
            java.net.PlainDatagramSocketImpl.receive0(Native Method)
            java.net.PlainDatagramSocketImpl.receive(Unknown Source)
            java.net.DatagramSocket.receive(Unknown Source)
            com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:22)
            com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:1)
            com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:20)
            com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
            java.lang.Thread.run(Unknown Source)
    Thread[main,5,main]
            java.lang.Object.wait(Native Method)
            com.tangosol.coherence.component.util.daemon.queueProcessor.Service.start(Service.CDB:18)
            com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:6)
            com.tangosol.coherence.component.net.Cluster.onStart(Cluster.CDB:56)
            com.tangosol.coherence.component.net.Cluster.start(Cluster.CDB:11)
            com.tangosol.coherence.component.util.SafeCluster.startCluster(SafeCluster.CDB:3)
            com.tangosol.coherence.component.util.SafeCluster.restartCluster(SafeCluster.CDB:10)
            com.tangosol.coherence.component.util.SafeCluster.ensureRunningCluster(SafeCluster.CDB:26)
            com.tangosol.coherence.component.util.SafeCluster.start(SafeCluster.CDB:2)
            com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:427)
            com.tangosol.coherence.component.application.console.Coherence.run(Coherence.CDB:25)
            com.tangosol.coherence.component.application.console.Coherence.main(Coherence.CDB:3)
            sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            java.lang.reflect.Method.invoke(Unknown Source)
            com.tangosol.net.CacheFactory.main(CacheFactory.java:827)
    Thread[Recovery Thread,5,Cluster]
            java.lang.Thread.dumpThreads(Native Method)
            java.lang.Thread.getAllStackTraces(Unknown Source)
            com.tangosol.net.GuardSupport.logStackTraces(GuardSupport.java:810)
            com.tangosol.internal.net.cluster.DefaultServiceFailurePolicy.onGuardableRecovery(DefaultServiceFailurePolicy.java:44)
            com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$WrapperGuardable.recover(Grid.CDB:1)
            com.tangosol.net.GuardSupport$Context$1.run(GuardSupport.java:653)
            java.lang.Thread.run(Unknown Source)
    2013-10-24 11:17:58.273/275.814 Oracle Coherence GE 3.7.1.0 <Warning> (thread=Recovery Thread, member=n/a): Attempting recovery of Guard{Daemon=IpMonitor}
    Exception in thread "main" 2013-10-24 11:18:24.025/301.566 Oracle Coherence GE 3.7.1.0 <Error> (thread=main, member=n/a): Error while starting cluster: com.tangosol.net.RequestTimeoutException: Timeout during service start: ServiceInfo(Id=0, Name=Cluster, Type=Cluster
      MemberSet=MasterMemberSet(
        ThisMember=null
        OldestMember=null
        ActualMemberSet=MemberSet(Size=0
        MemberId|ServiceVersion|ServiceJoined|MemberState
        RecycleMillis=1200000
        RecycleSet=MemberSet(Size=0
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onStartupTimeout(Grid.CDB:3)
            at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.start(Service.CDB:28)
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:6)
            at com.tangosol.coherence.component.net.Cluster.onStart(Cluster.CDB:56)
            at com.tangosol.coherence.component.net.Cluster.start(Cluster.CDB:11)
            at com.tangosol.coherence.component.util.SafeCluster.startCluster(SafeCluster.CDB:3)
            at com.tangosol.coherence.component.util.SafeCluster.restartCluster(SafeCluster.CDB:10)
            at com.tangosol.coherence.component.util.SafeCluster.ensureRunningCluster(SafeCluster.CDB:26)
            at com.tangosol.coherence.component.util.SafeCluster.start(SafeCluster.CDB:2)
            at com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:427)
            at com.tangosol.coherence.component.application.console.Coherence.run(Coherence.CDB:25)
            at com.tangosol.coherence.component.application.console.Coherence.main(Coherence.CDB:3)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at com.tangosol.net.CacheFactory.main(CacheFactory.java:827)
    java.lang.reflect.InvocationTargetException
    2013-10-24 11:18:24.025/301.566 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Service Cluster left the cluster at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at com.tangosol.net.CacheFactory.main(CacheFactory.java:827)
    Caused by: com.tangosol.net.RequestTimeoutException: Timeout during service start: ServiceInfo(Id=0, Name=Cluster, Type=Cluster
      MemberSet=MasterMemberSet(
        ThisMember=null
        OldestMember=null
        ActualMemberSet=MemberSet(Size=0
        MemberId|ServiceVersion|ServiceJoined|MemberState
        RecycleMillis=1200000
        RecycleSet=MemberSet(Size=0
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onStartupTimeout(Grid.CDB:3)
            at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.start(Service.CDB:28)
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:6)
            at com.tangosol.coherence.component.net.Cluster.onStart(Cluster.CDB:56)
            at com.tangosol.coherence.component.net.Cluster.start(Cluster.CDB:11)
            at com.tangosol.coherence.component.util.SafeCluster.startCluster(SafeCluster.CDB:3)
            at com.tangosol.coherence.component.util.SafeCluster.restartCluster(SafeCluster.CDB:10)
            at com.tangosol.coherence.component.util.SafeCluster.ensureRunningCluster(SafeCluster.CDB:26)
            at com.tangosol.coherence.component.util.SafeCluster.start(SafeCluster.CDB:2)
            at com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:427)
            at com.tangosol.coherence.component.application.console.Coherence.run(Coherence.CDB:25)
            at com.tangosol.coherence.component.application.console.Coherence.main(Coherence.CDB:3)
            ... 5 more
    C:\Users\Zk5rjg8>

Hi SajeevPynadath,
1. First start the server process with "cache-server.cmd".
2. After that you can start another server or client process; the "coherence.cmd" script starts a client process that joins the cluster.
3. Now you have 2 processes, and your cluster.xml will look like this:
<socket-address id="serverprocess">
  <address>171.193.103.25</address>
  <port>8088</port>
</socket-address>
<socket-address id="clientprocess">
  <address>171.193.103.25</address>
  <port>8089</port>
</socket-address>
4. Before starting each process, remember to put on the Java command line:
    for server
    -Dtangosol.coherence.localhost=171.193.103.25 -Dtangosol.coherence.localport=8088
    for client
    -Dtangosol.coherence.localhost=171.193.103.25 -Dtangosol.coherence.localport=8089
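Putting it together, a full launch might look like this (classpath and file locations are assumptions):
java -cp coherence.jar -Dtangosol.coherence.override=cluster.xml -Dtangosol.coherence.localhost=171.193.103.25 -Dtangosol.coherence.localport=8088 com.tangosol.net.DefaultCacheServer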
    regards,
    Leo_TA

  • Can I create different Coherence nodes in the same cluster with different cache configurations?

Can I create different Coherence nodes in the same cluster with different cache-config.xml files?
Can a cache be distributed across these different nodes?

Yes. You can create different Coherence nodes in the same cluster with different cache-config.xml files, as long as you use the same tangosol-coherence.xml file and the same tangosol-coherence-override.xml file. But you cannot store the cache data across nodes that were started with different cache-config files. In other words, a node only creates caches on the nodes that were started with the same cache-config.xml file.
    See the following demo:
I start a cache server using the cache config file examples-cache-config.xml. Then I start a storage-disabled cache console (cache client) using the cache config file coherence-cache-config.xml. Both of them use the same tangosol-coherence.xml file and the same tangosol-coherence-override.xml file.
The cache server uses the cache service PartitionedPofCache, but the client side is using the DistributedCache service. The cluster address is the same, 224.3.5.2,
and the cluster name is also the same; they know each other.
    D:\coherence\lib>D:\examples\java\bin\run-cache-server.cmd
    D:\coherence\lib>D:\examples\java\bin\run-cache-server.cmd
    The system cannot find the file D:\coherence.
    The system cannot find the file C:\Oracle\Middleware\jdk160_11.
    2009-12-22 12:09:31.400/4.987 Oracle Coherence 3.5.2/463 <Info> (thread=main, member=n/a): Loaded operational configurat
    ion from resource "jar:file:/D:/coherence/lib/coherence.jar!/tangosol-coherence.xml"
    2009-12-22 12:09:31.450/5.037 Oracle Coherence 3.5.2/463 <Info> (thread=main, member=n/a): Loaded operational overrides
    from resource "jar:file:/D:/coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
    2009-12-22 12:09:31.470/5.057 Oracle Coherence 3.5.2/463 <D5> (thread=main, member=n/a): Optional configuration override
    "/tangosol-coherence-override.xml" is not specified
    2009-12-22 12:09:31.540/5.127 Oracle Coherence 3.5.2/463 <D5> (thread=main, member=n/a): Optional configuration override
    "/custom-mbeans.xml" is not specified
    Oracle Coherence Version 3.5.2/463
    Grid Edition: Development mode
    Copyright (c) 2000, 2009, Oracle and/or its affiliates. All rights reserved.
    2009-12-22 12:09:33.864/7.451 Oracle Coherence GE 3.5.2/463 <Info> (thread=main, member=n/a): Loaded cache configuration
    from "file:/D:/examples/java/resource/config/examples-cache-config.xml"
    2009-12-22 12:09:39.983/13.570 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=n/a): Service Cluster joined t
    he cluster with senior service member n/a
    2009-12-22 12:09:43.187/16.774 Oracle Coherence GE 3.5.2/463 <Info> (thread=Cluster, member=n/a): Created a new cluster
    "cluster:0xD3FB" with Member(Id=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Locatio
    n=process:144, Role=CoherenceServer, Edition=Grid Edition, Mode=Development, CpuCount=1, SocketCount=1) UID=0xC0A8085000
    000125B75D888C60501F98
    2009-12-22 12:09:43.508/17.095 Oracle Coherence GE 3.5.2/463 <D5> (thread=Invocation:Management, member=1): Service Mana
    gement joined the cluster with senior service member 1
    2009-12-22 12:09:46.582/20.169 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache:PartitionedPofCache, member=1
    ): Service PartitionedPofCache joined the cluster with senior service member 1
    2009-12-22 12:09:46.672/20.259 Oracle Coherence GE 3.5.2/463 <Info> (thread=DistributedCache:PartitionedPofCache, member
    =1): Loading POF configuration from resource "file:/D:/examples/java/resource/config/examples-pof-config.xml"
    2009-12-22 12:09:46.702/20.289 Oracle Coherence GE 3.5.2/463 <Info> (thread=DistributedCache:PartitionedPofCache, member
    =1): Loading POF configuration from resource "jar:file:/D:/coherence/lib/coherence.jar!/coherence-pof-config.xml"
    2009-12-22 12:09:47.734/21.321 Oracle Coherence GE 3.5.2/463 <Info> (thread=main, member=1): Started DefaultCacheServer.
    SafeCluster: Name=cluster:0xD3FB
    Group{Address=224.3.5.2, Port=35463, TTL=4}
    MasterMemberSet
      ThisMember=Member(Id=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Location=process
    :144, Role=CoherenceServer)
      OldestMember=Member(Id=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Location=proce
    ss:144, Role=CoherenceServer)
      ActualMemberSet=MemberSet(Size=1, BitSetCount=2
        Member(Id=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Location=process:144, Rol
    e=CoherenceServer)
      RecycleMillis=120000
      RecycleSet=MemberSet(Size=0, BitSetCount=0
    Services
      TcpRing{TcpSocketAccepter{State=STATE_OPEN, ServerSocket=192.168.8.80:8088}, Connections=[]}
      ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.5, OldestMemberId=1}
      InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=1}
      DistributedCache{Name=PartitionedPofCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCo
    unt=1, AssignedPartitions=257, BackupPartitions=0}
    2009-12-22 12:12:29.737/183.324 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Member(Id=2, Timestamp=20
    09-12-22 12:12:29.541, Address=192.168.8.80:8089, MachineId=24656, Location=process:1188, Role=CoherenceConsole) joined
    Cluster with senior member 1
    2009-12-22 12:12:30.498/184.085 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Member 2 joined Service M
    anagement with senior member 1
    2009-12-22 12:12:31.860/185.447 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): TcpRing: connecting to me
    mber 2 using TcpSocket{State=STATE_OPEN, Socket=Socket[addr=/192.168.8.80,port=8089,localport=2463]}
2009-12-22 12:12:51.338/204.925 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=1): Member 2 joined Service DistributedCache with senior member 2
The following command starts a cache client.
    D:\coherence\bin>coherence.cmd
    D:\coherence\bin>coherence.cmd
    ** Starting storage disabled console **
    java version "1.6.0_11"
    Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
    Java HotSpot(TM) Server VM (build 11.0-b16, mixed mode)
    2009-12-22 12:12:21.054/3.425 Oracle Coherence 3.5.2/463 <Info> (thread=main, member=n/a): Loaded operational configurat
    ion from resource "jar:file:/D:/coherence/lib/coherence.jar!/tangosol-coherence.xml"
    2009-12-22 12:12:21.355/3.726 Oracle Coherence 3.5.2/463 <Info> (thread=main, member=n/a): Loaded operational overrides
    from resource "jar:file:/D:/coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
    2009-12-22 12:12:21.365/3.736 Oracle Coherence 3.5.2/463 <D5> (thread=main, member=n/a): Optional configuration override
    "/tangosol-coherence-override.xml" is not specified
    2009-12-22 12:12:21.415/3.786 Oracle Coherence 3.5.2/463 <D5> (thread=main, member=n/a): Optional configuration override
    "/custom-mbeans.xml" is not specified
    Oracle Coherence Version 3.5.2/463
    Grid Edition: Development mode
    Copyright (c) 2000, 2009, Oracle and/or its affiliates. All rights reserved.
    2009-12-22 12:12:29.316/11.687 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=n/a): Service Cluster joined t
    he cluster with senior service member n/a
    2009-12-22 12:12:29.356/11.727 Oracle Coherence GE 3.5.2/463 <Info> (thread=Cluster, member=n/a): Failed to satisfy the
    variance: allowed=16, actual=20
    2009-12-22 12:12:29.356/11.727 Oracle Coherence GE 3.5.2/463 <Info> (thread=Cluster, member=n/a): Increasing allowable v
    ariance to 17
    2009-12-22 12:12:29.807/12.178 Oracle Coherence GE 3.5.2/463 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Time
    stamp=2009-12-22 12:12:29.541, Address=192.168.8.80:8089, MachineId=24656, Location=process:1188, Role=CoherenceConsole,
    Edition=Grid Edition, Mode=Development, CpuCount=1, SocketCount=1) joined cluster "cluster:0xD3FB" with senior Member(I
    d=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Location=process:144, Role=CoherenceS
    erver, Edition=Grid Edition, Mode=Development, CpuCount=1, SocketCount=1)
    2009-12-22 12:12:29.977/12.348 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=n/a): Member 1 joined Service
    Management with senior member 1
    2009-12-22 12:12:29.977/12.348 Oracle Coherence GE 3.5.2/463 <D5> (thread=Cluster, member=n/a): Member 1 joined Service
    PartitionedPofCache with senior member 1
    2009-12-22 12:12:30.578/12.949 Oracle Coherence GE 3.5.2/463 <D5> (thread=Invocation:Management, member=2): Service Mana
    gement joined the cluster with senior service member 1
    SafeCluster: Name=cluster:0xD3FB
    Group{Address=224.3.5.2, Port=35463, TTL=4}
    MasterMemberSet
      ThisMember=Member(Id=2, Timestamp=2009-12-22 12:12:29.541, Address=192.168.8.80:8089, MachineId=24656, Location=proces
    s:1188, Role=CoherenceConsole)
      OldestMember=Member(Id=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Location=proce
    ss:144, Role=CoherenceServer)
      ActualMemberSet=MemberSet(Size=2, BitSetCount=2
        Member(Id=1, Timestamp=2009-12-22 12:09:38.06, Address=192.168.8.80:8088, MachineId=24656, Location=process:144, Rol
    e=CoherenceServer)
        Member(Id=2, Timestamp=2009-12-22 12:12:29.541, Address=192.168.8.80:8089, MachineId=24656, Location=process:1188, R
    ole=CoherenceConsole)
      RecycleMillis=120000
      RecycleSet=MemberSet(Size=0, BitSetCount=0
    Services
      TcpRing{TcpSocketAccepter{State=STATE_OPEN, ServerSocket=192.168.8.80:8089}, Connections=[]}
      ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.5, OldestMemberId=1}
      InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=1}
    Map (?):
    2009-12-22 12:12:49.505/31.906 Oracle Coherence GE 3.5.2/463 <Info> (thread=main, member=2): Loaded cache configuration
    from "jar:file:/D:/coherence/lib/coherence.jar!/coherence-cache-config.xml"
    2009-12-22 12:12:51.358/33.729 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache, member=2): Service Distribut
    edCache joined the cluster with senior service member 2
    <distributed-scheme>
      <!--
      To use POF serialization for this partitioned service,
      uncomment the following section
      <serializer>
      <class-
      name>com.tangosol.io.pof.ConfigurablePofContext</class-
      name>
      </serializer>
      -->
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>example-binary-backing-map</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
</distributed-scheme>
But when I try to store data into the cache from the client side, it reports an error message: storage is not configured. It shows that this cache console cannot store data in the existing cache server, because they are using different cache config files.
    Map (ca3): cache ca2
    <distributed-scheme>
      <!--
      To use POF serialization for this partitioned service,
      uncomment the following section
      <serializer>
      <class-
      name>com.tangosol.io.pof.ConfigurablePofContext</class-
      name>
      </serializer>
      -->
      <scheme-name>example-distributed</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>example-binary-backing-map</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    Map (ca2): put 1 one
    2009-12-22 14:00:04.999/6467.370 Oracle Coherence GE 3.5.2/463 <Error> (thread=main, member=2):
    java.lang.RuntimeException: Storage is not configured
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.onMissing
    Storage(DistributedCache.CDB:9)
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureReq
    uestTarget(DistributedCache.CDB:34)
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(Distr
    ibutedCache.CDB:22)
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.put(Distr
    ibutedCache.CDB:1)
            at com.tangosol.util.ConverterCollections$ConverterMap.put(ConverterCollections.java:1541)
            at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.put(Distrib
    utedCache.CDB:1)
            at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
            at com.tangosol.coherence.component.application.console.Coherence.processCommand(Coherence.CDB:581)
            at com.tangosol.coherence.component.application.console.Coherence.run(Coherence.CDB:39)
            at com.tangosol.coherence.component.application.console.Coherence.main(Coherence.CDB:3)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.tangosol.net.CacheFactory.main(CacheFactory.java:1400)

  • How to make 2 Node File Cluster with SAS Disks

    Hello,
I can't find any detailed, specific information and answer to this question.
I have 2 servers, say HP, with 4 SAS disks each: 1 for the OS (2012 R2 Datacenter) and 3 available.
I want to create a fault-tolerant SMB 3.0 share for my Hyper-V nodes, to make a Hyper-V FT cluster afterwards.
So, I see in the requirements that SAS disks will work, but during cluster creation I cannot get the cluster to see the disks from both servers (6 free units in total).
If it is done via an iSCSI target, both servers have access to those targets and the cluster sees all the disks available through iSCSI, but how do I make the SAS disks available to the other server?
Is what I need possible with this configuration? Is it an option to make each server an iSCSI target and initiator? (But that would be way complicated and slow, I think.)
So, what have I misunderstood? How do I make an FT file cluster with 2 servers with SAS drives?
Any info about this SPECIFIC config would be welcomed, or any general step-by-step guides (please do not link me guides for other configs, like iSCSI or with additional servers, I have seen a lot of them :()
thanks

You cannot do what you want with the Microsoft built-in tools. MSFT requires you to take your SAS disks away from your servers, buy a SAS JBOD (better more than one, to get the advantage of so-called "enclosure awareness" and avoid a single point of failure) and configure Clustered Storage Spaces with the now *external* SAS drives. See:
How to Configure a Clustered Storage Space in Windows Server 2012
http://blogs.msdn.com/b/clustering/archive/2012/06/02/10314262.aspx
Prerequisites
- A minimum of three physical drives, with at least 4 gigabytes (GB) capacity each, are required to create a storage pool in a Failover Cluster.
- The clustered storage pool MUST be comprised of Serial Attached SCSI (SAS) connected physical disks. Layering any form of storage subsystem, whether an internal RAID card or an external RAID box, regardless of being directly connected or connected via a storage fabric, is not supported.
Windows Server 2012 R2 relaxed some limitations, so now ReFS is supported (but useless, as VMs cannot be integrity-checked and protected) and you can use parity spaces (also useless, as they are DOG slow with a typical VM workload dominated by small writes; say, a 4 KB write initiates a 256 KB+ parity-stripe read-modify-write update). The core requirement, "SAS everywhere", is still there.
So if you don't want to mess with SAS JBODs, you may give a try to the virtual SAN solutions available on the market. They can cluster a pair of hosts (even with single-port SATA drives) without any external hardware; only Ethernet is required. There are even free options available. See:
Free Virtual SAN
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
Also Google (or Bing?) for DataCore and SteelEye, as they have very similar native (Windows-based) offerings; plus there's a bunch of VM-based storage products doing more or less the same.
Hope this helped a bit :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Long Distance Storage Replication

Looking for some data on storage replication between data centers connected via VPN. The VPN is based on 7206VXR/IOS with DS-3 access. The DS-3 endpoints are on the same Tier 1 provider. The distance is Miami to/from Chicago.
Replication of a 20 GB SQL database (initially) in a 12-hour window. The current SQL copy takes roughly 16 hours.
EMC CLARiiON array replication is next on the plate.

There are many options available to you; it would depend on the budget and the infrastructure you already have in place. All of the storage products could provide you with FCIP links. The FCIP link/tunnel would flow as data inside the VPN tunnel.
There are modules for the 7200s that act as a B-port and would encapsulate the data across an FCIP link, so the endpoints would be the 7200s. The SN5428 can also provide the same FCIP connectivity, and the FC ports from the storage could then plug directly into the 5428.
What may be the best option is the MDS with compression and write acceleration. Depending on the encapsulation of the DS-3, the MTU can run from 1500 to 4400. So the MDS could sit off a gigabit link from the 7200s, and you would make the MTU on the MDS's gigabit interface the MTU of the DS-3 minus the overhead of the VPN encapsulation. Then you could apply compression and/or write acceleration to get the FCIP link from MDS to MDS and provide the speed that you desire.
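As a rough back-of-envelope check on the link itself: a DS-3 is about 45 Mbps, and a 20 GB database is about 160,000 Mb, so a fully utilized DS-3 could move it in roughly 160,000 / 45 ≈ 3,600 seconds, i.e. about an hour. A 16-hour copy therefore points at protocol chattiness and the Miami-Chicago round-trip latency rather than raw bandwidth, which is exactly the problem that write acceleration and compression are meant to address.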

  • Configure Storage Replication - where to create Source Log Disk

    Hey guys,
I tried to create a Scale-Out File Server storage replication. Unfortunately, I can't find an option to create the "source log disk", nor could I find anything about the prerequisites for this drive.
Has anyone found the missing piece?

    Prerequisites and answers for most setup questions can be found here:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/f843291f-6dd8-4a78-be17-ef92262c158d/getting-started-with-windows-volume-replication?forum=WinServerPreview&prof=required
    Ned Pyle [MSFT] | Sr. Program Manager for Storage Replica, DFS Replication, Scale-out File Server, SMB 3, probably some other $%#^#% stuff
