WAN Replication

Are there any more comprehensive guides on setting up WAN replication between clusters? The main documentation seems fairly limited and doesn't give any examples.
I currently have 2 clusters set up and am attempting to set up replication between them, but all I see in the logs is:
####<Jun 27, 2012 2:08:46 AM EDT> <Debug> <WANReplicationDetails> <host2> <ServerA> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <f1d32304a55d0a19:-6bffc807:1382c7f5779:-8000-000000000000005a> <1340777326327> <BEA-000000> <SecurePersistenceServiceImpl.getInstance subject for t3://host1:7001,host1:7002,host1:7003 is principals=[CrossDomainUser, CrossDomainConnectors]>
####<Jun 27, 2012 2:08:46 AM EDT> <Debug> <WANReplicationDetails> <host2> <ServerA> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <f1d32304a55d0a19:-6bffc807:1382c7f5779:-8000-000000000000005a> <1340777326332> <BEA-000000> <Failed to get initial context t3://host1:7001,host1:7002,host1:7003>
Specifically, some info on cross domain communication and what the remote cluster address should be set to would be great.
Thanks

Hi,
Good questions. The transport for Push Replication is vanilla TCP/IP.
The way it works is that the receiving cluster needs to declare a TCP Proxy Service
with a local address and port.
Push Replication services which are publishing to that cluster then have to
declare a remote invocation service and a publishing service (one for every
cache and destination cluster).
Typically Push Replication batches up updates and then pushes them out the door
to the destination by remotely invoking a "local" publisher. That is, something
that is local to the target cluster and knows how to iterate through a batch of updates and
put them in the correct target cache.
Batch publishing has two settings which you will want to adjust depending on your load.
One is the batch size (i.e. the number of entries that are batched and published in one visit
to the target cluster). The other is the delay: how long the Publishing Service sleeps
before it wakes up to see whether there is more to publish.
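To make the batch size / delay trade-off concrete, here is a minimal sketch of a batching publisher loop. The class and method names are invented for illustration (they are not the Incubator's actual API); the real pattern wires this up declaratively in the cache configuration.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch only: illustrates how "batch size" and "delay" interact.
public class BatchPublisherSketch implements Runnable {
    private final BlockingQueue<Object> pending = new LinkedBlockingQueue<>();
    private final int batchSize;    // entries published in one visit to the target cluster
    private final long delayMillis; // how long the publisher sleeps between visits

    public BatchPublisherSketch(int batchSize, long delayMillis) {
        this.batchSize = batchSize;
        this.delayMillis = delayMillis;
    }

    // Called whenever a cache update needs to be replicated.
    public void enqueue(Object update) {
        pending.add(update);
    }

    @Override
    public void run() {
        List<Object> batch = new ArrayList<>(batchSize);
        while (!Thread.currentThread().isInterrupted()) {
            try {
                pending.drainTo(batch, batchSize);
                if (!batch.isEmpty()) {
                    publishRemotely(batch);
                    batch.clear();
                }
                Thread.sleep(delayMillis); // the "delay" setting
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // In the real pattern this would be a remote invocation against the destination
    // cluster's TCP proxy, where a "local" publisher iterates the batch into the target cache.
    private void publishRemotely(List<Object> batch) {
        System.out.println("Publishing batch of " + batch.size() + " entries");
    }
}

The trade-off is that a larger batch size with a longer delay means fewer trips across the WAN, while a shorter delay keeps the destination cluster more current.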
Regards,
Bob

Similar Messages

  • WAN replication not working for objects

    I am trying out the WAN replication example from the Tangosol website. I got the Boston-London example working. However, it seems to only work for simple types like String, int, etc. Objects like Hibernate entities and compressed strings are not replicated across. Is more configuration needed for this to work?

    Hi Sham,
    In one environment we are using a publish and an author server, and custom comments replicate to the Author environment. We are not using a custom workflow or JCR observation. We created one more environment for the Production setup with 2 publish instances and 1 author instance, and for this one comments are not auto-replicated to the Author instance.
    The default comment modification and comment activation launchers are in place, with no change to them. Is it not working because of the additional Publish instance, and if so, what additional settings are required to address it?
    Thanks
    Yogesh

  • Must the client always be up before the master for WAN replication to work?

    From our testing, we have noticed that the client must always be up before the master in order for WAN replication to work. Is this the case? If so, is there some kind of flag in Tangosol to allow the master to reconnect to the client?

    Try the following changes to the JS file
    Lines 103 and 104 change the values
    this.showDelay = 100; // was 250
    this.hideDelay = 200; // was 600
    Comment out line 286
    Spry.Widget.MenuBar.prototype.bubbledTextEvent = function()
    {
    //    return Spry.is.safari && (event.target == event.relatedTarget.parentNode || (event.eventPhase == 3 && event.target.parentNode == event.relatedTarget));
    };
    Comment out line 366 and add new lines 366 and 367
    var self = this;
    this.addEventListener(listitem, 'click', function(e){self.Click(listitem, e);}, false);
    this.addEventListener(listitem, 'click', function(e){self.mouseOver(listitem, e);}, false);
    //   this.addEventListener(listitem, 'mouseover', function(e){self.mouseOver(listitem, e);}, false);
    this.addEventListener(listitem, 'mouseout', function(e){if (self.enableKeyboardNavigation) self.clearSelection(); self.mouseOut(listitem, e);}, false);
    I have not tested the above changes on touch screens; they do seem to work OK on desktops.
    NOTE: Line numbers could be different because of the difference in our versions.

  • Diff between WAN replication and MAN replication

    Hi All,
    Kindly tell me the difference between WAN replication and MAN replication in clusters in WebLogic Server.
    Thanks in advance.
    Balaji kumar

    MAN replication uses in-memory replication to keep the primary and secondary copies of the session
    on different clusters. It always attempts to locate the secondary session on the remote cluster.
    If connectivity between the data centers is lost, the local cluster will replicate the session within
    the cluster. On the first request for the session once inter data center connectivity is restored, WebLogic
    Server will relocate the session’s secondary to the remote cluster.
    WAN replication uses in-memory replication within the primary cluster and asynchronous
    replication via a database to keep a third copy of the session at the remote data center.
    The database-backed replication mechanism makes the session data available to the other data center's
    cluster should data center-level failover occur. The local server buffers its primary session updates in
    an in-memory buffer that is periodically flushed. The database-backed replication has two modes of
    operation:
    - The local cluster periodically flushes the in-memory buffer to its local database. Cross-data-center
    replication is handled externally using some sort of database level replication technology
    of your choosing.
    - The local cluster uses RMI to push updates periodically from the in-memory buffer to the
    remote cluster. The remote cluster writes these updates to its local database.
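    To illustrate the buffering half of that description, here is a minimal sketch (not WebLogic's internal implementation) of a session buffer that is flushed periodically to a local database table; the table name, flush interval and error handling are all invented for the example.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import javax.sql.DataSource;

    public class WanSessionBufferSketch {
        private final Map<String, byte[]> buffer = new ConcurrentHashMap<>();
        private final DataSource dataSource;

        public WanSessionBufferSketch(DataSource dataSource, long flushIntervalSeconds) {
            this.dataSource = dataSource;
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(this::flush, flushIntervalSeconds,
                    flushIntervalSeconds, TimeUnit.SECONDS);
        }

        // Every session update only touches the in-memory buffer.
        public void recordUpdate(String sessionId, byte[] serializedSession) {
            buffer.put(sessionId, serializedSession);
        }

        // Periodic flush to the local database; cross-data-center replication of the
        // table is then handled by database-level replication or by an RMI push.
        private void flush() {
            String sql = "UPDATE wan_session_state SET session_data = ? WHERE session_id = ?"; // hypothetical table
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                for (Map.Entry<String, byte[]> entry : buffer.entrySet()) {
                    ps.setBytes(1, entry.getValue());
                    ps.setString(2, entry.getKey());
                    ps.addBatch();
                    buffer.remove(entry.getKey()); // simplified: a failed batch would need a retry
                }
                ps.executeBatch();
            } catch (SQLException e) {
                e.printStackTrace(); // sketch only; real code would retry or log properly
            }
        }
    }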

  • Queries related to Replication concepts

    Hi,
    I have following queries w.r.t. replications
    1) What is a subscriber database and what is its purpose?
    2) What's the difference between a standby database and a subscriber database?
    3) If I have a standby database, why do we need a subscriber database?
    4) Can I have more than one standby database?
    Regards,
    Harmeet Kaur

    The A/S pair concept goes as follows:
    - Two 'master' databases as a tightly coupled pair; at any moment one has the 'active' role and the other has the 'standby' role.
    - There can only ever be two master databases within the overall configuration
    - Updates are only allowed at the active, the standby is read only
    - Cache tables (if used) are present in both masters as actual cache groups
    - Cache autorefresh drives into the active master and is replicated to the standby, updates to AWT cache groups happen at the active and are replicated to the standby; the standby propagates them to Oracle
    - Role switch within the pair is easy and can be as a result of a failover or a managed switchover
    - The two masters must be on the same LAN or at least the same network with LAN characteristics and their system clocks must be aligned to within 250 ms
    - Replication between the masters can be asynchronous, return receipt (on request) or return twosafe (on request)
    - Subscribers are optional. You can have 0 to 128 of them.
    - Subscribers are always read only
    - Subscribers can reside on the same LAN as the masters or remotely (e.g. across a WAN)
    - Replication to the subscribers is always asynchronous
    - Cache tables (if used) are present on the subscribers as regular (non-cache) tables
    - While it is possible to convert a subscriber to a master, this will mean that it is no longer part of this A/S pair setup. This is usually only done in a DR scenario where a remote master is being used to instantiate a new A/S pair at the disaster recovery site.
    I hope this clarifies the use of subscribers. They are primarily used for:
    1. Read scale out (reader farm)
    2. Simple DR
    3. Oracle/AWT DR (advanced use case)
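    To make the pair-plus-subscriber layout concrete, here is a rough JDBC sketch; the data store names are invented, the JDBC URL is passed in because it depends on your TimesTen install, and the exact DDL should be checked against the TimesTen replication guide before use.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateActiveStandbyPairSketch {
        public static void main(String[] args) throws Exception {
            // args[0]: JDBC URL of the store that will take the ACTIVE role
            try (Connection con = DriverManager.getConnection(args[0]);
                 Statement stmt = con.createStatement()) {
                // Two tightly coupled masters plus one asynchronous, read-only subscriber.
                stmt.execute("CREATE ACTIVE STANDBY PAIR masterA, masterB SUBSCRIBER reader1");
            }
        }
    }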
    Chris

  • Native PULL across WAN

    Does anyone know if the latest version of Coherence provides a native PULL mechanism across a WAN replication scheme?

    To clarify the question
    We know about Coherence's "Push Replication Pattern", which can be used to replicate data between grids over a WAN, and which was introduced about 4 years ago.
    Since that time:
    1)     have any other mechanisms been introduced by Oracle/Coherence that can be used to replicate data across a WAN?
    2)     and, in particular, does Coherence have its own kind of "PULL" replication?
    Thanks

  • Multi-Master Replication over the WAN

    DS version: 5.1 sp1
    Did any one implement multi-master replication across the WAN or different IP subnets?

    Sun does mention that DS 5.2 is better than DS 5.1 in WAN-based Multi-Master replication with respect to replication performance. I wanted to see if anyone out there had implemented it (or even played with this topology in their labs) without any major hiccups.
    Thank you!
    Thank you!

  • Slow merge replication over a WAN link - only downloads

    Hi all,
    We've been using SQL Server merge replication for a few years to synchronise data between our data centres, but we are now suffering with a big performance issue. This may be because the amount of data we are synchronising has increased a lot this year.
    Our publisher is an always-on data centre in the UK. Our subscriber is a mobile data centre that travels around the world and is on for periods of up to a week at a time, approx. 25 times a year. However, it also spends the same amount of time (if not more)
    switched off whilst on its travels - it is a well travelled data centre!
    We have 5 databases that we synchronise on these servers. However, one of our databases has high numbers of data changes between periods of subscriber downtime and our issue is that it takes days to catch up when the server is powered up - the other databases
    are fine.
    Downloads from publisher to subscriber run at about 1.5 rows a second (which is annoying when we have hundreds of thousands of rows) but strangely uploads from subscriber to publisher run about ten times faster.
    Things I have checked / tried:
    all tables have non-clustered primary keys on guid columns that have the rowguid property set
    changing the generation levelling threshold doesn't help
    setting the agent profile to high volume doesn't help
    running a trace at the publisher and subscriber shows the queries are all running very fast (less than 20 ms generally, but there are gaps of 200 ms or so between some batches of queries)
    analysis on our WAN link shows we have huge amounts of bandwidth spare
    analysis on our servers show we have huge amounts of Ram and CPU spare
    Some of the places the subscriber is at do suffer from high latency but this doesn't seem to have an impact - 300 ms or 100 ms and we still get the same poor performance.
    One things I did wonder about - does the replication confirm to the publisher every time it has successfully processed a row at the subscriber? If we have thousands of rows and there is a latency on the line will this compound the issue if it confirms each
    item? If this does happen, is there a way to batch up messages between publisher and subscriber?
    Any help that you can offer will be gladly received!
    Thanks
    Mark

    1) Merge replication processes the uploads/downloads in batches. So no, there is no confirmation for each row processed, but there is per batch. If there is an error/conflict a batch is retried as singletons (a single row processed at a time), so you need
    to minimize conflicts/errors.
    2) No, but you should use pull subscriptions and WAN accelerators for max performance and change your profile to use 2000 for the following:
    MaxDownloadChanges 
    MaxUploadChanges
    UploadGenerationsPerBatch  
    DownloadGenerationsPerBatch 
    UploadReadChangesPerBatch 
    DownloadReadChangesPerBatch 
    UploadWriteChangesPerBatch 
    DownloadWriteChangesPerBatch 
    set network packet size to a large value - you will need to work with your WAN engineer to find out what works best. 32k will likely be best.
    Also you may want to copy a file of a known size from your publisher to your subscriber and then from your subscriber to your publisher to see if there is a significant difference in time. Differences in drive speed can also yield significant differences.
    You may also find that you have a trigger which is generating a large volume of unnecessary changes. You will need to see what table/article is generating all the changes and evaluate whether the changes are legit.
    Precomputed partitions should speed up the enumeration times. Ensure you have supporting indexes - the missing indexes DMV helps here.
    Lastly you may find that a reinitialization is faster than processing the changes.

  • Is It Possible to Add a Fileserver to a DFS Replication Group Without Connectivity to FSMO Roles Holder DC But Connectivity to Site DC???

    I apologize in advance for the rambling novella, but I tried to include as many details ahead of time as I could.
    I guess like most issues, this one's been evolving for a while, it started out with us trying to add a new member 
    to a replication group that's on a subnet without connectivity to the FSMO roles holder. I'll try to describe the 
    layout as best as I can up front.
    The AD only has one domain & both the forest & domain are at 2008R2 function level. We've got two sites defined in 
    Sites & Services, Site A is an off-site datacenter with one associated subnet & Site B with 6 associated subnets, A-F. 
    The two sites are connected by a WAN link from a cable provider. Subnets E & F at Site B have no connectivity to Site A 
    across that WAN, only what's available through the front side of the datacenter through the public Internet. The network 
    engineering group involved refuses to route that WAN traffic to those two subnets & we've got no recourse against that 
    decision; so I'm trying to find a way to accomplish this without that if possible.
    The FSMO roles holder is located at Site A. I know that I can define a Site C, add Subnets E & F to that site, & then 
    configure an SMTP site link between Sites A & C, but that only handles AD replication, correct? That still wouldn't allow me, for example, 
    to enumerate DFS namespaces from subnets E & F, or to add a fileserver on either of those subnets as a member to an existing
    DFS replication group, right? Also, root scalability is enabled on all the namespace shares.
    Is there a way to accomplish both of these things without transferring the FSMO roles from the original DC at Site A to, say, 
    the bridgehead DC at Site B? 
    When the infrastructure was originally setup by a former analyst, the topology was much more simple & everything was left
    under the Default First Site & no sites/subnets were setup until fairly recently to resolve authentication issues on 
    Subnets E & F... I bring this up just to say, the FSMO roles holder has held them throughout the build out & addition of 
    all sorts of systems & I'm honestly not sure what, if anything, the transfer of those roles will break. 
    I definitely don't claim to be an expert in any of this, I'll be the first to say that I'm a work-in-progress on this AD design stuff, 
    I'm all for R'ing the FM, but frankly I'm dragging bottom at this point in finding the right FM. I've been digging around
    on Google, forums, & TechNet for the past week or so as this has evolved, but no resolution yet. 
    On VMs & machines on subnets E & F when I go to DFS Management -> Namespace -> Add Namespaces to Display..., none show up 
    automatically & when I click Show Namespaces, after a few seconds I get "The namespaces on DOMAIN cannot be enumerated. The 
    specified domain either does not exist or could not be contacted". If I run a dfsutil /pktinfo, nothing shows except \sysvol 
    but I can access the domain-based DFS shares through Windows Explorer with the UNC path \\DOMAIN-FQDN\Share-Name then when 
    I run a dfsutil /pktinfo it shows all the shares that I've accessed so far.
    So either I'm doing something wrong, or, for some random large, multinational company, every subnet & fileserver one wants 
    to add to a DFS Namespace has to be able to contact the FSMO roles holder? Or are those ADs broken down with a child domain 
    for each site, with a FSMO roles holder for that child domain located in each site?

    Hi,
    A DC in Site B should be helpful. I still have not seen any article saying that a DFS client has to connect to the PDC every time it tries to access a domain-based DFS namespace.
    Please see following article. I pasted a part of it below:
    http://technet.microsoft.com/en-us/library/cc782417(v=ws.10).aspx
    Domain controllers play numerous roles in DFS:
    Domain controllers store DFS metadata in Active Directory about domain-based namespaces. DFS metadata consists of information about the entire namespace, including the root, root targets, links, link targets, and settings. By default, root servers
    that host domain-based namespaces periodically poll the domain controller acting as the primary domain controller (PDC) emulator master to obtain an updated version of the DFS metadata and store this metadata in memory.
    So other DCs need to connect to the PDC for updated metadata.
    Whenever an administrator makes a change to a domain-based namespace, the
    change is made on the domain controller acting as the PDC emulator master and is then replicated (via Active Directory replication) to other domain controllers in the domain.
    Domain Name Referral Cache
    A domain name referral contains the NetBIOS and DNS names of the local domain, all trusted domains in the forest, and domains in trusted forests. A
    DFS client requests a domain name referral from a domain controller to determine the domains in which the clients can access domain-based namespaces.
    Domain Controller Referral Cache
    A domain controller referral contains the NetBIOS and DNS names of the domain controllers for the list of domains it has cached. A DFS client requests a domain controller referral from a domain controller (in the client’s domain)
    to determine which domain controllers can provide a referral for a domain-based namespace.
    Domain-based Root Referral Cache
    The domain-based root referrals in this memory cache do not store targets in any particular order. The targets are sorted according to the target selection method only when requested from the client. Also, these referrals are based on DFS metadata stored
    on the local domain controller, not the PDC emulator master.
    Thus it seems to be acceptable to have a short disconnect between sites while the cache is still working in Site B.

  • Event ID: 5014, 5004 The DFS Replication Service is stopping communication with partner / Error 1726 (The remote procedure call failed.)

    I'm replicating between two servers in two sites (Server A - Server 2012 R2 STD, Server B - Server 2008 R2) over a VPN (Sonicwall Firewall).  Though the initial replication seems to be
    happening it is very slow (the folder in question is less than 3GB).  I'm seeing these in the event viewer every few minutes:
    The DFS Replication service is stopping communication with partner PPIFTC for replication group FTC due to an error. The service will retry the connection periodically.
    Additional Information:
    Error: 1726 (The remote procedure call failed.)
    and then....
    The DFS Replication service successfully established an inbound connection with partner PPIFTC for replication group FTC.
    Here are all my troubleshooting steps (keep in mind that our VPN is going through a SonicWall <--I increased the TCP timeout to 24 hours):
    -Increased TCP Timeout to 24 hours 
    -Added the following values on both sending and receiving members and rebooted server
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
    Value =DisableTaskOffload
    Type = DWORD
    Data = 1
    Value =EnableTCPChimney
    Type = DWORD
    Data = 0
    Value =EnableTCPA
    Type = DWORD
    Data = 0
    Value =EnableRSS
    Type = DWORD
    Data = 0
    ---------------------------------more troubleshooting--------------------------
    -Disabled AntiVirus on both members
    -Made sure DFSR TCP ports 135 & 5722 are open
    -Installed all hotfixes for 2008 R2 (http://support.microsoft.com/kb/968429) and rebooted
    -Ran NETSTAT -ANOBP TCP and the DFS executable results are listed below:
    Sending Member:
    [DFSRs.exe]
      TCP    10.x.x.x:53       0.0.0.0:0          LISTENING     1692
    [DFSRs.exe]
      TCP    10.x.x.x:54669    10.x.x.x:5722      TIME_WAIT     0
      TCP    10.x.x.x:54673    10.x.x.x:5722      ESTABLISHED   1656
    [DFSRs.exe]
      TCP    10.x.x.x:64773    10.x.x.x:389       ESTABLISHED   1692
    [DFSRs.exe]
      TCP    10.x.x.x:64787    10.x.x.x:389       ESTABLISHED   1656
    [DFSRs.exe]
      TCP    10.x.x.x:64795    10.x.x.x:389       ESTABLISHED   2104
    Receiving Member:
    [DFSRs.exe]
      TCP    10.x.x.x:56683    10.x.x.x:389       ESTABLISHED   7472
    [DFSRs.exe]
      TCP    10.x.x.x:57625    10.x.x.x:54886     ESTABLISHED   2808
    [DFSRs.exe]
      TCP    10.x.x.x:61759    10.x.x.x:57625     TIME_WAIT     0
      TCP    10.x.x.x:61760    10.x.x.x:57625     TIME_WAIT     0
      TCP    10.x.x.x:61763    10.x.x.x:57625     TIME_WAIT     0
      TCP    10.x.x.x:61764    10.x.x.x:57625     TIME_WAIT     0
      TCP    10.x.x.x:61770    10.x.x.x:57625     TIME_WAIT     0
      TCP    10.x.x.x:61771    10.x.x.x:57625     TIME_WAIT     0
      TCP    10.x.x.x:61774    10.x.x.x:57625     TIME_WAIT     0
      TCP    10.x.x.x:61775    10.x.x.x:57625     TIME_WAIT     0
      TCP    10.x.x.x:61776    10.x.x.x:57625     TIME_WAIT     0
      TCP    10.x.x.x:61777    10.x.x.x:57625     TIME_WAIT     0
      TCP    10.x.x.x:61778    10.x.x.x:57625     TIME_WAIT     0
      TCP    10.x.x.x:61779    10.x.x.x:57625     TIME_WAIT     0
      TCP    10.x.x.x:61784    10.x.x.x:52757     ESTABLISHED   7472
    [DFSRs.exe]
      TCP    10.x.x.x:63661    10.x.x.x:63781     ESTABLISHED   4880
    ------------------------------more troubleshooting--------------------------
    -Increased Staging to 32GB
    -Opened the ADSIedit.msc console to verify the "Authenticated Users" is set with the default READ permission on the following object:
    a. The computer object of the DFS server
    b. The DFSR-LocalSettings object under the DFS server computer object
    -Ran ping 10.x.x.x -f -l 1472 and got replies back from both servers
    -AD replication is successful on all partners
    -Nslookup is working so DNS is working
    -Updated NIC drivers on both servers
    - I ran the following to set the Primary Member:
    dfsradmin Membership Set /RGName:<replication group name> /RFName:<replicated folder name> /MemName:<primary member> /IsPrimary:True
    Then Dfsrdiag Pollad /Member:<member name>
    I'm seeing these errors in the dfsr logs:
    20141014 19:28:17.746 9116 SRTR   957 [WARN] SERVER_EstablishSession Failed to establish a replicated folder session. connId:{45C8C309-4EDD-459A-A0BB-4C5FACD97D44} csId:{7AC7917F-F96F-411B-A4D8-6BB303B3C813}
    Error:
    + [Error:9051(0x235b) UpstreamTransport::EstablishSession upstreamtransport.cpp:808 9116 C The content set is not ready]
    + [Error:9051(0x235b) OutConnection::EstablishSession outconnection.cpp:532 9116 C The content set is not ready]
    + [Error:9051(0x235b) OutConnection::EstablishSession outconnection.cpp:471 9116 C The content set is not ready]
    ---------------------------------------more troubleshooting-----------------------------
    I've done a lot of research on the Internet and most of it is pointing to the same stuff I've tried.  Does anyone have any other suggestions?  Maybe I need to look somewhere
    else on the server side or firewall side? 
    I tried replicating from a 2012 R2 server to another 2012 server and am getting the same events in the event log so maybe it's not a server issue. 
    Some other things I'm wondering:
    -Could it be the speed of the NICs?  Server A is a 2012 Server that has Hyper-V installed.  NIC teaming was initially setup and since Hyper-V is installed the NIC is a "vEthernet
    (Microsoft Network Adapter Multiplexor Driver Virtual Switch) running at a speed of 10.0Gbps whereas Server B is running a single NIC at 1.0Gbps
    -Could occasional ping timeouts cause the issue?  From time to time I get a timeout but it's not as often as the events I'm seeing.  I'm getting 53ms pings.  The folder
    is only 3 GB so it shouldn't take that long to replicate but it's been days.  The schedule I have set for replication is mostly all day except for our backup times which start at 11pm-5am.  Throughout the rest of the time I have it set anywhere from
    4Mbps to 64 Kbps.  Server A is on a 5mb circuit and Server B is on a 10mb circuit. 

    I'm seeing the same errors; all servers are running 2008 R2 x64, across multiple sites, and the VPN is steady and reliable.
    185 events from 12:28:21 to 12:49:25
    Events are for all five servers (one per office, five total offices, no two in the same city, across three states).
    Events are not limited to one replication group. I have quite a few replication groups, so I don't know for sure but I'm running under the reasonable assumption that none are spared.
    Reminder from original post (and also, yes, same for me), the error is: Error: 1726 (The remote procedure call failed.)
    Some way to figure out what code triggers an Event ID 5014, and what code therein specifies an Error 1726, would be extremely helpful. Trying random command line/registry changes on live servers is exceptionally unappealing.
    Side note, 1726 is referenced here:
    https://support.microsoft.com/kb/976442?wa=wsignin1.0
    But it says, "This RPC connection problem may be caused by an unstable WAN connection." I don't believe this is the case for my system.
    It also says...
    For most RPC connection problems, the DFS Replication service will try to obtain the files again without logging a warning or an error in the DFS Replication log. You can capture the network trace to determine whether the cause of the problem is at the network
    layer. To examine the TCP ports that the DFS Replication service is using on replication partners, run the following command in a
    Command Prompt window:
    NETSTAT -ANOBP TCP
    This returns all open TCP connections. The connections in question are "DFSRs.exe", which the command won't let you filter for.
    Instead, I used the NETSTAT command as advertised, dumping output to info.txt:
    NETSTAT -ANOBP TCP >> X:\info.txt
    Then I opened Excel and manually opened the .TXT file with the import wizard. I chose fixed-width fields based on the first row for each result, and then added a column:
    =IF(A3="Can not", "Can not obtain ownership information", IF(LEFT(A3,1) = "[", A3&B3&C3, ""))
    Dragging this down through the entire file let me see that column (column F) as the file name. Some anomalies were present but none impacted the DFSRs.exe results.
    Finally, you can sort/filter (I sorted because I like being able to see everything, should I choose to) to get just the results you need, with the partial rows removed from the result set, or bumped to the end.
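    If you'd rather script that step than use Excel, here is a small sketch (the file path is just an example) that reads the saved NETSTAT output and keeps only the connections owned by DFSRs.exe, assuming the format shown earlier where an "[image.exe]" line precedes the connection rows it owns.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class FilterDfsrConnections {
        public static void main(String[] args) throws IOException {
            String owner = "";
            for (String line : Files.readAllLines(Paths.get("X:\\info.txt"))) { // hypothetical path
                String trimmed = line.trim();
                if (trimmed.startsWith("[")) {
                    owner = trimmed;                 // e.g. "[DFSRs.exe]"
                } else if (trimmed.startsWith("TCP") && owner.equalsIgnoreCase("[DFSRs.exe]")) {
                    System.out.println(trimmed);     // a connection owned by DFSRs.exe
                }
            }
        }
    }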
    My server had 125 connections open.
    That is a staggering number of connections to review, and I feel like I'm looking for a needle in a haystack.
    I'll see if I can find anything useful out, but a better solution would be most wonderful.

  • Data affinity between clusters separated via WAN

    Hi,
    I need your help, again! :)
    In the scenario where I have two clusters separated via WAN, I want to know if it is possible to create data affinity to each cluster.
    That is, this scenario is not the same one the Push Replication Pattern solves, because I want to avoid data management conflicts between updates on both clusters. If I create data affinity to each cluster, Cluster A is responsible for data objects X and Y (and the same data objects in the other cluster are considered secondary/backup) and Cluster B is responsible for data objects W and Z (which also have a backup on Cluster A).
    Thanks.
    Regards,
    André

    Hi ggleyzer,
    Thanks for the help.
    But how can I route different keys to different caches if I want to have two clusters with the same information and caches, because of a possible disaster recovery scenario?
    The scenario that I imagine is almost the same as the Push Replication Pattern - Active-Active. (http://coherence.oracle.com/display/INCUBATOR/Auction+Site+Demo+1.0.0)
    I want to have 2 Clusters (e.g. "A" and "B") and have exactly the same information in each one. However, I want to create affinity of objects to each Cluster; for example, the objects X and Y are processed by "ClusterA" (even if the request arrives at "ClusterB") but still have a copy of them in "ClusterB" (X' and Y', just for the case of disaster). So: can I specify the namedCache of "ClusterA" for objects X and Y (having a copy of those objects on "ClusterB") and ensure that all processing of this type of object is done by "ClusterA"?
    If it's possible, do you think that approach is more suitable than the Push Replication Pattern? A problem that occurs with that pattern is the Conflict Resolver. However, I don't know which problems I will face with the other approach. Maybe the main disadvantage is the information crossing the WAN, because "ClusterB" may have to send processing for objects that have affinity to "ClusterA".
    Thanks,
    Regards,
    André

  • Transactional Replication - Slow running sp_MSget_repl_commands

    We are running transactional replication, with a publisher and a single subscriber located in different data centres. The distribution database is located with the subscriber.
    We get high replication latency regularly, and during those times I have identified the bottleneck as the execution of sp_MSget_repl_commands by the distribution reader. Monitoring this thread in the DMV dm_exec_requests, it is consistently getting long waits
    for wait_type ASYNC_NETWORK_IO.
    Why would the distribution reader be waiting on the network? The thread and distribution database are all located on the same server. Is there something else that the distribution reader thread does that requires accessing the publisher?

    Do you have a remote distributor?
    Expect the ASYNC_NETWORK_IO wait type when the distribution agent is replicating over a WAN/LAN. Pull subscriptions may work better.
    However, you need to determine what your latency is. If your replication latency is high, server minutes, or something which does not work for you - you have a problem. Otherwise it is just an inherent limit you can't do much about.

  • Transactional replication cross-country - where should distributor go?

    Hi All,
    SQL Server 2005 standard edition using transactional replication from Montreal to Vancouver.
    Our current distributor is region-based, so it is within our data centre in Toronto. So basically it goes MTL -> TO -> VAN.
    However, I'm thinking that the extra step to TO is unnecessary because it's in another physical location, and the extra hop/latency from MTL -> TO before going to VAN adds overhead.
    I know that a separate distributor is best practice but in this case, 3 physical locations, each with ~50ms latency between each one, is it better to have the publisher and distributor on the same server (in MTL) and then just go directly to the subscriber
    (VAN)? We don't have enough resource within MTL to build a separate distributor there so it would go on the same server as publisher.
    DB size is 135 MB, all tables synchronizing.
    Thanks in advance for any input.

    The network hop is normally insignificant when replicating locally, but it can be an issue when replicating across a WAN. It should be local not only to factor out the network latency but also to simplify your recovery and DR plans.
    A remote distributor is required when you have high CPU issues on your publisher. If you don't have this, or your workload is not characterized by high CPU, you will be able to get away with a local distributor.

  • Replication between 130 nodes and 1 Data Center

    Hi everyone.
    I have 130 database nodes (Oracle Standard Edition One) separated by large distances, and 1 Data Center with 3 nodes (Oracle Real Application Clusters 10g R2). The connection between the nodes and the data center is through various ISPs (WAN).
    I have exactly the same model design of database in nodes and datacenter.
    DataCenter is a repository of data for reporting to directors and dictate the business rules to guide all nodes.
    Each node has approximately 15 machines connected with a desktop application.
    In other words, a desktop application with a backend database (the node).
    My idea is that replication is not instant: when a transaction commits at a node, it is then replicated to the data center. Images are replicated overnight because they are heavy, approximately 1 MB per image. Each image corresponds to one transaction.
    On the other hand, I have to replicate some data from the data center to the nodes, such as business rules, for example: new company names, new persons, new prohibitions, etc.
    My problem is to determine the best way to replicate data from the nodes to the data center.
    Could somebody please suggest the best solution?
    Thanks in advance.

    Last I checked, Streams and multi-master replication require enterprise edition databases at both ends, which rules them out for the sort of deployment you're envisioning.
    If a given table will only ever be modified on nodes or on the master site, never both, you can build everything as read-only materialized views. This would probably require, though, that the server at the data center have 130 copies of each table, 1 per node. For schemas of any size, this obviously gets complicated very quickly. For asynchronous replication to work, you'd need to schedule periodic refreshes, which assumes that you have relatively stable internet connections between the nodes and the data center.
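    As a rough illustration of the read-only materialized view approach (the connection details, table name, database link and hourly refresh interval are all invented for the example, so treat it as a sketch rather than a recipe):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateNodeSnapshotSketch {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@node01:1521:NODEDB", "scott", "tiger"); // placeholder connection
                 Statement stmt = con.createStatement()) {
                // Pulls business-rule data from the data center into the node as a
                // read-only snapshot, refreshed asynchronously roughly once an hour.
                stmt.execute(
                    "CREATE MATERIALIZED VIEW company_rules_mv " +
                    "REFRESH FORCE START WITH SYSDATE NEXT SYSDATE + 1/24 " +
                    "AS SELECT * FROM company_rules@datacenter_link");
            }
        }
    }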
    I guess I would tend to question the utility of having so many nodes. Is it really necessary to have so many? Or could you just beef up the master and have everyone connect directly?
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Transactional Replication - SQL Server 2012

    Newbie to Replication.
    Configured transactional replication on a database and everything runs fine, but out of all the tables, one table with a couple of VARCHAR(MAX) columns and about 3 million records takes 80% of the replication duration.
    After a bit of research and some recommendations, I configured the server with the following:
    EXEC sp_configure 'max text repl size', -1 ;
    but still the table takes around 80% of the replication time.
    Can someone point me to other areas / best practices / articles to improve Replication performance?
    Sreedhar

    You need to determine why it is taking so long to replicate.
    Is it during snapshot application? If so, use the initialize-from-backup option. Or is it after the snapshot has been deployed?
    Then you need to use tracer tokens to see if it is the log reader agent or the distribution agent. There is not a lot you can do with the log reader agent and it is not normally the bottleneck. It is more likely the distribution agent.
    If so, I would first try to factor out the network. Replicate this table locally and see if you still have the same latency. If you don't, it is the network and you need to try to minimize the network impact. Use a pull subscription and use a WAN accelerator.
    These will have more impact than setting the network packet size.
