Storage Replication

Hi,
I have a question about storage replication link configuration. We have two DCs in different locations and need to replicate all data over an L3 MPLS link. We are using EMC storage with RecoverPoint appliances (RPAs) at both sites.
The old DC has a Catalyst 6509 at core/distribution and the new DC has a Nexus 7000, and we plan to terminate the L3 MPLS link on these core switches at both sites. The RPAs will be connected to the core switches.
I have not configured this before, so I have no idea whether it will work, how to connect the RPAs at both sites over an L3 link, or whether a router is required.
If anyone has experience with this, please help me establish this link.

Your storage vendor should have published a white paper on implementing storage replication for Oracle ASM environments.
Ask your sales contact for such a white paper.
Hemant K Chitale

Similar Messages

  • Synchronous Storage Replication Process

    Hi,
    I have a question regarding storage replication in synchronous mode. Describing the process in detail for twin data centers, DC-A and DC-B:
    1- The server in DC-A writes to the DC-A disk.
    2- Before the server in DC-A receives an ACK from the DC-A disk, this data is replicated to the other DC's disk, the DC-B disk.
    3- To replicate the data to the other DC, we first ask the other DC, before sending anything across the DCI, "are you ready to receive?"
    4- We get confirmation from DC-B that it is ready.
    5- DC-A then sends the data through the DCI to DC-B.
    6- The DC-B disk receives the data and must then send an ACK back to the DC-A disk.
    7- Once the ACK is received at the DC-A disk, the DC-A disk sends the ACK to the server in DC-A which originated the write.
    My question is whether steps 3 and 4 should be counted in the latency budget; some sources count them and some do not. Should I count them or not? This decision is important because it changes the maximum distance between the twin DCs by a factor of 2.
    Thank you very much for the help.
    Regards,
    J

    Hi,
    just to add more information: my doubt comes from two Cisco Live sessions about DCI.
    This session says we must count the traffic passing through the DCI four times in order to work out the maximum DCI distance:
    https://www.ciscolive.com/online/connect/sessionDetail.ww?SESSION_ID=6623&backBtn=true
    This other session says we must count it only two times:
    https://www.ciscolive.com/online/connect/sessionDetail.ww?SESSION_ID=7775&backBtn=true
    Which one is right? The impact is huge (a worked latency example follows after this post).
    Thank you.
    Regards,
    J
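    For what it's worth, here is a minimal latency sketch of the two counting conventions (assumptions: roughly 5 microseconds of one-way propagation delay per km of fibre, propagation is the only delay considered, and the 1 ms per-write budget is an illustrative placeholder, not a vendor requirement):
    ```python
    # Rough latency budget for synchronous replication over a DCI link.
    # Assumption: ~5 microseconds of one-way propagation delay per km of fibre;
    # equipment and serialization delays are ignored.
    US_PER_KM = 5

    def write_penalty_us(distance_km, traversals):
        """Extra latency added to every write by the DCI, in microseconds."""
        return distance_km * US_PER_KM * traversals

    def max_distance_km(budget_us, traversals):
        """Largest DCI distance that keeps the replication penalty inside the budget."""
        return budget_us / (US_PER_KM * traversals)

    budget_us = 1000  # illustrative 1 ms per-write replication latency budget

    # Counting only data + ACK (steps 5 and 6): 2 traversals.
    # Also counting the "are you ready?" handshake (steps 3 to 6): 4 traversals.
    for traversals in (2, 4):
        print(f"{traversals} traversals: max distance = {max_distance_km(budget_us, traversals):.0f} km, "
              f"penalty at 100 km = {write_penalty_us(100, traversals)} us")
    ```
    With the same 1 ms budget, the maximum distance drops from about 100 km (2 traversals) to about 50 km (4 traversals), which is exactly the factor-of-2 impact described in the question; the answer therefore hinges on whether the arrays really perform the ready handshake per write or only the data transfer and ACK.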

  • Is Sun Storage 7000 Storage replication adapter compatible with 2540 series

    We are evaluating VMware Site Recovery Manager 4.0. Is the Sun Storage 7000 storage replication adapter compatible with the Sun Storage 2540 series?

    I would say the short answer is no. The 2540 is an LSI-based device, and the SRM adapter for it is written by LSI. The 7000 series array is based on OpenSolaris and ZFS, quite a different beast.

  • Campus cluster with storage replication

    Hi all..
    We are planning to implement a campus cluster with storage replication over a distance of 4 km, using the Remote Mirror feature of the Sun StorageTek 6140.
    The primary storage (the one where the quorum resides) and the replicated secondary storage will be in separate sites interconnected with dedicated single-mode fibre.
    The cluster nodes will use the primary storage, and the data from the primary will be replicated to the secondary storage using Remote Mirror.
    Now, in case the primary storage fails completely, how can the cluster continue operating with the secondary storage? What is the procedure? What does the initial configuration look like?
    Regards..
    S

    Hi,
    a high level overview with a list of restrictions can be found here:
    http://docs.sun.com/app/docs/doc/819-2971/6n57mi28m?q=TrueCopy&a=view
    More details how to set this up can be found at:
    http://docs.sun.com/app/docs/doc/819-2971/6n57mi28r?a=view
    The basic setup would be 2 nodes, 2 storage boxes, and TrueCopy between the 2 boxes, but no cross-cabling. The HAStoragePlus resource, as part of a service resource group, would use a device that had been "cldevice replicate"-ed by the administrator, so that the "same" device could be used on both nodes.
    I am not sure how a failover is triggered if the primary storage box fails. But due to the "replication" setup mentioned above, Sun Cluster knows how to reconfigure the replication in the case of a failover.
    Unfortunately, due to lack of HDS storage in my own lab, I was not able to test this setup; so this is all theory.
    Regards
    Hartmut
    PS: Keep in mind that the only replication technology integrated into SC today is HDS TrueCopy. If you're thinking of doing manual failovers anyway, you could have a look at Sun Cluster Geographic Edition, which is more of a disaster-recovery-style configuration that combines 2 or more clusters and is able to fail over resource groups including replication; this product already supports more replication technologies and will support even more in the future. Have a look at http://docsview.sfbay.sun.com/app/docs/coll/1191.3

  • L3 Link for Storage Replication

    Hi,
    Has anyone got experience provisioning an L3 MPLS link for storage replication? I have two DCs at different locations and EMC storage.
    We will have 2 L3 MPLS links between the two DCs to replicate the data. We have 4 RPAs at each side and 2 MPLS links, but I have no idea how to connect all these things together. I have a Nexus 7000 in the new DC and a 6509 in the old DC.
    If anyone has any idea how to provision this link, it would be a great help to me.

    Hi Micky,
    Such a table doesn't exist in standard SAP! You should try to accomplish this in a view!

  • Long Distance Storage Replication

    Looking for some data on storage replication between data centers connected via VPN. The VPN is based on 7206VXR/IOS with DS-3 access; the DS-3 endpoints are on the same Tier 1 provider. The distance is Miami to/from Chicago.
    Replication of a 20 GB SQL database (initially) in a 12-hour window; the current SQL copy takes roughly 16 hours.
    EMC CLARiiON array replication is next on the plate.

    There are many options available to you, and the choice depends on your budget and the infrastructure you already have in place. All of the storage products could provide you with FCIP links; the FCIP link/tunnel would flow as data inside the VPN tunnel.
    There are modules for the 7200s that act as a B-port and encapsulate the data across an FCIP link, so the endpoints would be the 7200s. The SN5428 can provide the same FCIP connectivity, and the FC ports from the storage could plug directly into the 5428.
    What may be the best option is the MDS with compression and write acceleration. Depending on the encapsulation of the DS-3, the MTU can run from 1500 to 4400. So the MDS could sit off a Gigabit link from the 7200s, with the MTU on the MDS's Gigabit interface set to the MTU of the DS-3 minus the VPN encapsulation overhead. You could then apply compression and/or write acceleration on the FCIP link from MDS to MDS to get the speed you need (a rough throughput check is sketched below).
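    As a rough sanity check on the 12-hour window, here is a back-of-the-envelope sketch (the payload-efficiency figure is an assumption, not a measurement):
    ```python
    # Back-of-the-envelope: can a 20 GB initial copy fit in a 12-hour window over a DS-3?
    DS3_MBPS = 44.736      # nominal DS-3 line rate in Mbit/s
    EFFICIENCY = 0.75      # assumed usable fraction after VPN/FCIP/TCP overhead (placeholder)
    data_gb = 20           # database size in gigabytes
    window_h = 12          # replication window in hours

    required_mbps = data_gb * 8 * 1000 / (window_h * 3600)    # throughput needed to finish in the window
    available_mbps = DS3_MBPS * EFFICIENCY                    # assumed effective throughput
    transfer_h = data_gb * 8 * 1000 / available_mbps / 3600   # estimated transfer time in hours

    print(f"Required: {required_mbps:.1f} Mbit/s, assumed available: {available_mbps:.1f} Mbit/s")
    print(f"Estimated transfer time: {transfer_h:.1f} h")
    ```
    On raw bandwidth the DS-3 is clearly not the bottleneck (under 4 Mbit/s sustained would be enough), so the 16-hour copy time points at protocol chattiness and round-trip latency between Miami and Chicago, which is precisely what FCIP write acceleration and compression on the MDS are meant to address.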

  • Configure Storage Replication - where to create Source Log Disk

    Hey guys,
    I tried to set up storage replication for a Scale-Out File Server. Unfortunately, I can't find an option to create the "source log disk", nor could I find anything about the prerequisites for this drive.
    Has anyone found the missing piece?

    Prerequisites and answers for most setup questions can be found here:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/f843291f-6dd8-4a78-be17-ef92262c158d/getting-started-with-windows-volume-replication?forum=WinServerPreview&prof=required
    Ned Pyle [MSFT] | Sr. Program Manager for Storage Replica, DFS Replication, Scale-out File Server, SMB 3, probably some other $%#^#% stuff

  • SQL Server 2008 R2 and storage replication

    Hello All !
    On the production site, I have SQL Server 2008 R2 for a fat-client application. On the DRP site, I have the same SQL Server 2008 R2 fat-client application.
    The production and DRP sites are interconnected with physical SAN storage (synchronous replication). The SAN LUNs are synchronously replicated (data + logs).
    In this case, is it necessary to keep log shipping between the two sites, or is the LUN replication sufficient?
    Thanks in advance for your ideas / help - Regards - Have a nice day! RHUM2

    <<In this case, is it necessary to keep log shipping between the two sites, or is the LUN replication sufficient?>>
    That question is impossible for us to answer, since we don't know your requirements (SLA etc.). But if you re-phrase your question, we can give feedback with which you can hopefully make that decision:
    What can log shipping protect me from that SAN replication can't? I can think of a couple of things (others are free to chime in):
    Corruption at the page level. SAN replication gives you a binary image of the data: if a page is corrupt at site A, it will be corrupt at site B. Log shipping works by restoring transaction log backups.
    Delayed log restore. If a problem happens at site A, you can decide to have delayed log restores at site B and restore log backups up to just prior to the accident.
    Both of the above assume that you know what you are doing and have planned for these things. But technically, they are examples of things that log shipping allows for.
    Tibor Karaszi, SQL Server MVP |
    web | blog

  • EMC RecoverPoint and storage replication

    Hi,
    I just need some feedback or input regarding DR. Can you share your thoughts on using EMC RecoverPoint for storage-level replication of an Oracle database?
    DB info: Oracle 11gR2 on Solaris 10, 2-node RAC on ASM, on ZFS storage.
    Best Regards
    Maran.


  • Platform migration via storage replication?

    We have a 10.2.0.4 database running on an HP-UX platform that uses EMC for its storage. We want to migrate the database to an AIX server. Is it possible to just clone the LUNs where the datafiles live using the EMC utilities and then bring up the database on the AIX server, since both platforms use a big-endian format for the files?
    Thanks.

    923395 wrote:
    We have a 10.2.0.4 database running on an HP-UX platform that uses EMC for its storage. We want to migrate the database to an AIX server. Is it possible to just clone the LUNs where the datafiles live using the EMC utilities and then bring up the database on the AIX server, since both platforms use a big-endian format for the files?
    Thanks.
    Sounds plausible. What does it cost to actually test it? If you clone the LUNs, the live, current database is not going to care about what you do with the clones. And whether it works or not, you'll know the answer first hand instead of taking as gospel something you 'read on the internet' (a quick endian check is sketched below).
    It must be true, I read it on the internet. They can't put anything on the internet that isn't true.
    Where did you read that?
    On the internet.
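    If you want to confirm the recorded endian formats before testing, a quick query against V$TRANSPORTABLE_PLATFORM is sketched below (it assumes the cx_Oracle driver is available; the connection details are placeholders):
    ```python
    # Check the endian format Oracle records for the HP-UX and AIX platforms.
    # Matching "Big" endian on both sides is the precondition for reusing datafiles
    # without an RMAN CONVERT step; it does not by itself guarantee the whole move works.
    import cx_Oracle  # assumes Oracle client libraries and cx_Oracle are installed

    conn = cx_Oracle.connect("system", "password", "dbhost/orcl")  # placeholder credentials
    cur = conn.cursor()
    cur.execute("""
        SELECT platform_name, endian_format
          FROM v$transportable_platform
         WHERE platform_name LIKE 'HP%' OR platform_name LIKE 'AIX%'
         ORDER BY platform_name
    """)
    for platform_name, endian_format in cur:
        print(f"{platform_name:45s} {endian_format}")
    cur.close()
    conn.close()
    ```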

  • DFSr supported cluster configurations - replication between shared storage

    I have a very specific configuration for DFSr that appears to be suffering severe performance issues when hosted on a cluster, as part of a DFS replication group.
    My configuration:
    3 physical machines (blades) within one physical quadrant.
    3 physical machines (blades) hosted within a separate physical quadrant.
    Both quadrants are extremely well connected: local, 10 Gbit/s fibre.
    There is local storage in each quadrant; no storage replication takes place.
    The 3 machines in the first quadrant are MS-clustered with shared storage LUNs on a 3PAR filer.
    The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
    8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an HAFS application associated with it which can fail over onto any machine in the local cluster.
    DFS replication groups have been set up for each LUN, and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a read-only copy on its own shared cluster storage.
    For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrants are Read-Only.
    This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines how to add a clustered service to a replication group. It clearly shows using "shared storage" for the cluster, which is common sense; otherwise there is effectively no application fail-over possible, which removes the entire point of using a resilient cluster.
    This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 defines the following:
    DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster
    as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2
    is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
    It then goes on to state, quite incredibly: DFS Replication does not support replicating files on Cluster Shared Volumes.
    Stating flatly that DFSr does not support Cluster Shared Volumes makes no sense at all after stating that clusters are supported in replication groups and a TechNet guide is provided to set up and configure exactly this configuration. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes - none at all.
    My question: I need some clarification - is the text meant to read "between" Clustered Shared Volumes?
    The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing in practice is a serious degradation of performance when attempting to replicate / write data between two clusters running an HAFS configuration, in a DFS replication group.
    For instance, as a test, if local/logical storage is mounted on a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on initial write, and is even higher for file amendments. When replicating between two nodes in a cluster, with shared clustered storage, the solution manages a weak 2,500 files per minute on initial write and only 260 files per minute when attempting to update data / amend files (a small write-rate benchmark is sketched after this post).
    By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, the DFSr configuration and the replication group configuration - the only factor left that makes any difference is replicating from shared clustered storage to another shared clustered storage LUN.
    So in summary:
    Logical Volume ---> Logical Volume = Fast
    Logical Volume ---> Clustered Shared Volume = ??
    Clustered Shared Volume ---> Clustered Shared Volume = Pitifully slow
    Can anyone explain why this might be?
    The guidance in the article is in clear conflict with all the other evidence around DFSr and clustering; however, it may point to why we are seeing a real issue with replication performance.
    Many thanks for your time and any help/replies that may be received.
    Paul
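    A tiny write-rate benchmark along the lines of the test described above is sketched here. It only measures raw small-file write throughput on whatever path it is pointed at (a plain logical volume versus a path on a Cluster Shared Volume), not DFSr itself, so treat it as a way to separate a storage bottleneck from a replication-engine one; the file count and size are arbitrary placeholders:
    ```python
    # Write N small files into a target directory and report files per minute.
    # Run it once against a plain logical volume and once against a CSV path,
    # then compare the two rates.
    import os
    import sys
    import time

    def small_file_rate(target_dir, count=5000, size=4096):
        """Return the sustained write rate, in files per minute, for `count` files of `size` bytes."""
        os.makedirs(target_dir, exist_ok=True)
        payload = os.urandom(size)
        start = time.time()
        for i in range(count):
            with open(os.path.join(target_dir, f"bench_{i:06d}.bin"), "wb") as f:
                f.write(payload)
        elapsed = time.time() - start
        return count / elapsed * 60

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "./dfsr_write_bench"
        print(f"{small_file_rate(target):,.0f} small files/minute written to {target}")
    ```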

    Hello Shaon Shan,
    I also have the same scenario at one of my customers' sites.
    We have two file servers running on Hyper-V 2012 R2 as guest VMs using Cluster Shared Volumes. Even the data partition drive is part of a CSV.
    It's really confusing whether DFS Replication on CSV is supported or not, and what the consequences would be of using it.
    To my knowledge we have some customers who are using Hyper-V 2008 R2 with DFS configured on CSV, and it has been running fine for more than 4 years without any issue.
    I would appreciate it if you could elaborate and explain in detail the limitations of using CSV.
    Thanks in advance,
    Abul

  • Storage groups & Replication

    What is the underlying technology that the App Volumes storage replication mechanism uses when setting up storage groups?
    Which ports need to be opened between two ESXi hosts (or two groups of ESXi hosts (Clusters))?
    I am having issues replicating to a datastore located on a particular cluster and was wondering if it could be related to specific ports that might not be open. The error displayed in vCenter is "Cannot connect to host".

    Hi,
    Just change the server you're pointing at (-Server) and adjust the Where clause as necessary for each iteration. That'll be the quickest fix.
    Alternatively you can use ForEach-Object and loop.
    Don't retire TechNet! -
    (Don't give up yet - 13,085+ strong and growing)
    Thanks for the quick reply. Server 1 has 4 storage groups on it that I would love for it to pull. If I did a ForEach-Object loop, how would that look? I am not that great at scripting, lol.

  • HANA High Availability System Vs Storage Vs VIP failover

    Dear Experts,
    Hope you're all doing great. I would like to seek your expertise on HANA high-availability best practice. We have decided to use TDI for BW on HANA. The next big question for us is how to make it at least 99.99% available (the downtime arithmetic for that target is sketched after this post).
    I have gone through multiple documents, SDN forums, etc., but would like to see how the experts do this in real environments.
    My view -
    Virtual IP failover - a common HA practice used to fail over the CI / DB hosts on failure or for maintenance. In this case, both nodes can be used to run app servers.
    System replication - HANA-based; requires a secondary standby node, which doesn't accept user requests but replicates the database from the primary using logs after an initial data snapshot, either synchronously or asynchronously. (Can be used as HA or DR if the servers are in different data centers.)
    Storage replication - also requires a secondary standby node; replication happens at the SAN level for HA/DR.
    Could you please share the method you followed for HANA HA, along with the pros, cons and challenges you have faced or are facing?
    Thanks
    Yoga
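    As a quick reference point for the 99.99% target, here is the downtime arithmetic (a minimal sketch; only the availability figure comes from the question):
    ```python
    # Downtime budget implied by an availability target.
    availability = 0.9999              # the "four nines" target from the question
    minutes_per_year = 365 * 24 * 60

    down_per_year = minutes_per_year * (1 - availability)
    down_per_month = down_per_year / 12

    print(f"{availability:.2%} availability allows ~{down_per_year:.0f} min/year "
          f"(~{down_per_month:.1f} min/month) of downtime")
    ```
    That budget works out to roughly 53 minutes per year, which is small enough that whichever mechanism is chosen (VIP takeover, system replication takeover, or storage failover) generally needs to be automated and regularly rehearsed rather than driven by a manual runbook alone.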

    Thanks forbrich
    Do you know of any specific doc that describes the installation and configuration steps for 10g RAC on NAS? If possible, can you provide a link that I could use to perform this task?
    I have done RAC installations on SAN without any problems and it is something I'm fairly experienced with. With NAS I am not really comfortable, since I can't seem to find any documentation that describes a step-by-step installation procedure, or guidelines for that matter.
    Thank you for your input
    Best Regards
    Abbas

  • LDOM Live Migration on replicated storage

    Hello,
    I have been challenged to provide a DR solution (switching hosts to a secondary DC) using LDOMs and the Live Migration feature. It seems easy, but there is one problem... the SAN storage is replicated, and the replica is read-only until the direction of the replication process is reversed, at which point it becomes read-write. So in the middle of the Live Migration process, at the moment when the LDOM is paused on the source machine and right before it is resumed on the target machine, something should tell the disk arrays to switch the replication direction... Any idea how to do that? Is there any feature in OVM for SPARC itself that could help, or perhaps some other software? Has anyone done such a thing already?
    Any help is appreciated.
    Jakub

    Hello Jakub,
    doing such a storage replication change automatically depends on a lot of details.
    Solaris Cluster can do this within a campus cluster, where the Solaris Cluster nodes are located in different data centers. Details about campus clusters are available in
    Campus Clustering With Oracle Solaris Cluster Software
    For Solaris Cluster 3.3 3/13 update2 we support this with Hitachi TrueCopy, Hitachi Universal Replicator, and EMC SRDF software. Details in:
    Using Storage-Based Data Replication Within a Cluster of SC33u2 Admin Guide
    For Solaris Cluster 4.x we support this with EMC SRDF software. Details in:
    Using Storage-Based Data Replication Within a Cluster of SC42 Admin Guide
    Solaris Cluster also provides an agent to fail over guest LDoms. Details in:
    Oracle Solaris Cluster Data Service for Oracle VM Server for SPARC Guide
    But such a setup/configuration involves many details about which specific versions of software and hardware are required and qualified. Therefore I would recommend engaging Advanced Customer Services (ACS) for such a campus cluster installation/planning.
    Which shared storage are you using?
    Hth,
       Juergen

  • Storage level redundancy - DR solution

    Database: Oracle 10g Enterprise Edition Release 2 with Partitioning option, 64-bit
    Type: OLTP (high transaction volume - 3000 SQL statements per second)
    For a high-availability database solution, Oracle provides Data Guard (physical/logical standby).
    Please validate the following scenario for implementing the DR solution using a storage-level option for this OLTP database (3000 SQL statements per second):
    1. Two database servers - primary and secondary
    2. Two Sun StorageTek 6140 storage arrays - primary and secondary
    3. Each database server connected to its own storage - the primary database server to the primary storage, the secondary database server to the secondary storage
    4. The primary database is up and running; the secondary database is down.
    5. The secondary storage is synchronized from the primary storage, either online or with a 15-minute delay.
    6. In case of failure of the primary server, the secondary database is started manually.
    Is there any Oracle-certified option available for storage-level replication on the Sun platform? Online transactions shouldn't be affected during synchronization.
    Please reply
    Thanks
    Yogendra

    One big advantage of Data Guard over storage replication is that you transfer fewer changes with Data Guard (only redo change vectors) rather than all the writes hitting the storage.
    This can be very helpful if the connection between your primary and standby does not have a large amount of bandwidth (a rough comparison is sketched below).
    Your standby is in another datacenter, right?
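    To put that bandwidth point into numbers, here is a minimal sketch; the redo and write rates are illustrative placeholders, which in practice you would take from AWR/Statspack and from the array's own statistics:
    ```python
    # Illustrative WAN bandwidth comparison:
    # Data Guard ships only redo, storage replication ships every block written.
    redo_mb_per_s = 2.0            # assumed average redo generation rate (placeholder)
    array_writes_mb_per_s = 15.0   # assumed average write rate at the array (placeholder: datafiles, temp, undo, redo)

    def to_mbit(mb_per_s):
        return mb_per_s * 8

    print(f"Data Guard (redo only):        ~{to_mbit(redo_mb_per_s):.0f} Mbit/s sustained")
    print(f"Storage replication (all I/O): ~{to_mbit(array_writes_mb_per_s):.0f} Mbit/s sustained")
    print(f"Ratio: ~{array_writes_mb_per_s / redo_mb_per_s:.1f}x more WAN bandwidth for storage replication")
    ```
    The exact ratio depends entirely on the workload, but for a write-heavy OLTP system the storage-level stream is typically several times larger than the redo stream.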
