Need suggestion on Data Center Migration

Hi All,
We are currently handling a data center migration project wherein the source instance is hosted at the Oracle data center (Texas)
and the target instance needs to be built at the **** hosted data center (U.K.).
--> It takes 3 days to ship the (cold) backup from the U.S. data center to the U.K. data center. We were able to bring up the database using that backup. But
in order to keep the new instance in sync with the source (which is still in use by the business), we have to apply all the archives generated in this gap.
--> Database size: 250 GB
--> We managed to ship the archives generated after the cold backup directly to an FTP server (on the U.K. network), but there are around 3000 - 4000 archives to be applied.
We have planned to perform a cancel-based recovery here.
In this scenario, we need your advice on whether there is a better alternative (best practice).
-Regards
Vinay
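
(For reference, the cancel-based recovery step described above would look roughly like the following SQL*Plus sketch; the archive path is hypothetical.)
-- Minimal sketch: roll the restored cold backup forward with the shipped archives.
STARTUP MOUNT;
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- Supply each shipped archive when prompted, e.g. /ftp_area/arch/ARC_1_12345.arc,
-- then type CANCEL after the last available archive has been applied.
ALTER DATABASE OPEN RESETLOGS;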

Hi,
have you seen this:
WAN Zero Data Loss Failover with Oracle Data Guard 11.2 using a Near-DR Strategy (Broker version) [Document 1489359.1]
Regards
Sebastian
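
(For context, the Data Guard approach in that note replaces the manual FTP shipping: once a physical standby exists in the U.K., the primary ships redo itself. A minimal sketch, with a hypothetical service name:)
-- On the primary: point an archive destination at the U.K. standby.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=uk_stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=uk_stby';
ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;
-- On the standby: apply redo as it arrives (real-time apply).
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;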

Similar Messages

  • Need help for data center designing

    Sir,
    I am going to design a data center with the following equipment:
    1. one router 7609
    2. two core switches (WS-C6509-E)
    3. two firewalls (WS-C6506-E, with Firewall blade)
    4. one voice router (CISCO 2821 with PVDM2-64, VWIC2-2MFT-T1/E1, PVDM2-32)
    5. one Remote Access Server (AS5400XM, AS5000XM 60 Dial Port Feature Card, AS5400 Octal E1/PRI DFC card)
    6. two CALLMANAGER-5.1
    7. multiple Cisco IP Phones 7940G with Video Advantage and VT Camera II
    8. one gatekeeper (2811)
    9. one Internet router (3845)
    10. one Authentication, Authorization and Accounting (AAA) system
    11. one ISDN RAS 2811 with 2-Port Channelized E1/T1/ISDN-PRI Network Module with video conferencing (Polycom)
    12. one Network Intrusion Detection/Prevention System (NIDS)
    13. one NMS
    14. one content switch for server load balancing
    15. multiple video phones
    16. many servers (mail, web, storage, etc.)
    17. Polycom MGC 100
    18. Polycom 7000
    Also, 20 units of 7206 VXR will be connected to the 7609 router through leased lines.
    If you could send me some links or sample designs and share some advice, I could gather ideas to design this data center properly.
    thanks
    tirtha

    IMO, the best place to start is by reading the SRNDs. They can be found here:
    http://www.cisco.com/en/US/netsol/ns656/networking_solutions_design_guidances_list.html
    Hope that helps.

  • Need suggestion for data encryption

    Hello Experts,
    I need your expert opinion on a data encryption method. We have a legal compliance requirement to implement data encryption as listed below; let's say we have to apply encryption on 2 tables: (1) TAB_A and (2) TAB_B.
    (1) Need data encryption on TAB_A & TAB_B for 2-3 columns, not the entire table.
    (2) Data should not be in readable format if anyone connects to the database and queries the table.
    (3) We have reporting services on our tables, but reporting services don't connect to our schema directly; rather, they connect to a different schema to which we have granted SELECT on the tables.
    (4) Reports should work as is, and users should see the data in readable format only.
    (5) There are batch processes which generate the data in these tables, and we are not allowed to make any changes to these batch processes.
    This is a business need which has to be delivered. I explored various options such as VPDs and data encryption methods, but honestly none of these serve our business need. There is also a limitation on encrypting the data, as the data volume is quite high (30 TB DB) and users generally query millions of records at a time. The reports have very tight SLAs as well: if we create any encryption wrapper, then decryption will take longer in reports and cause SLA misses.
    Could someone please suggest a better solution, or is something built into Oracle that serves this? We are using Oracle 11g.
    Regds,
    Amit.

    you can read about Transparent Data Encryption
    Check
    http://docs.oracle.com/cd/B28359_01/network.111/b28530/asotrans.htm
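    For example, column-level TDE on the two tables might look like the sketch below (the column names are hypothetical, and a wallet/keystore must already be configured in sqlnet.ora and open):
    -- One-time setup: create the TDE master key (wallet password is hypothetical).
    ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";
    -- Encrypt only the sensitive columns; decryption stays transparent to users
    -- who have SELECT granted, so reports keep working unchanged.
    ALTER TABLE tab_a MODIFY (sensitive_col1 ENCRYPT USING 'AES192');
    ALTER TABLE tab_b MODIFY (sensitive_col2 ENCRYPT USING 'AES192');
    Note that TDE protects data at rest in the data files; any user with SELECT privileges still sees cleartext, so requirement (2) would need grants/VPD on top of TDE rather than TDE alone.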

  • Need suggestions on date range query

    I have a requirement to show the amount of product remaining. There is a table that holds updated "inventory" amounts with a date and tonnage, and a series of transactional tables that detail the individual disbursements from the stockpile. The trick is that the dates for the inventory adjustments may not all be the same, meaning that I need to individually resolve the stockpiles.
    This query will give me the inventory disbursements:
    select FN_STN_KEY(j.FACTORY_ID, j.STATION_ID) as STATION,
           count(j.LOAD_JOB_ID) as LOADS,
           CASE SUM(w.SPOT_WEIGHT)
             WHEN 0 THEN SUM(NVL(j.MAN_SPOT_WT,0))
             ELSE SUM(w.SPOT_WEIGHT)
           END TONS
      from TC c, TC_LOAD_JOBS j, SPOT_WEIGHTS w
     where c.TC_ID = j.TC_ID
       and c.DATE_INDEX = w.DATE_INDEX and j.LOAD_RATE_ID = w.LOAD_RATE_ID
       and c.DATE_INDEX BETWEEN to_date('09/01/2009','MM/DD/YYYY') and sysdate
       and FN_STN_KEY(j.FACTORY_ID, j.STATION_ID) in (810,410)
     group by FN_STN_KEY(j.FACTORY_ID, j.STATION_ID);
    Note that the date and the list of stations in the where clause are dynamic and selected by the user. If this were only one station at a time, it wouldn't be this complicated.
    This query will give me the last known inventory amount:
    select to_char(MAX(AS_OF_DT),'Mon DD, YYYY'), TONS
      from STATION_LOG
     where AS_OF_DT < sysdate and STN_KEY in (810,410)
     group by TONS;
    Again, the date and list of stations are selected by the user. They should be identical to those selected for the other query.
    Does anyone have any good ideas on how to combine these two statements into a single report?
    Note: FN_STN_KEY acts as a join function. You don't really want me to get into why there isn't a single unique key to reference.

    Hi,
    I'm trying to follow your description, but lots of things don't make sense to me.
    blarman74 wrote:
    Yeah. I put in some data so I could get the message back to you, then filled in the rest.
    So the user is going to pass in two parameters: The date of the report and the list of stations they want to get an inventory count on. What were the parameters that produced the output you posted before:
    STATION     INITIAL_TONS     USED_TONS     AS_OF_DATE
    810               835500        465100      09/01/2010
    410               495800        366900      09/02/2010
    550               568900        122600      08/31/2010
    What I need the report to do is
    1) take a station from the list
    2) find out the inventory tally from STATION_LOG where the date is the largest date less than the supplied date. This should give me AS_OF_DATE and my initial quantity.
    3) query the data table for all tons hauled from the AS_OF_DATE for that station.
    4) repeat for the next station.
    So this is what your existing PL/SQL code does. A non-procedural language, like SQL, won't follow the same steps, of course.
    The sample data for station_log is:
    INSERT INTO STATION_LOG (1, to_date('08/31/2010','MM/DD/YYYY'), 810, 562500);
    INSERT INTO STATION_LOG (2, to_date('09/02/2010','MM/DD/YYYY'), 410, 495500);
    INSERT INTO STATION_LOG (3, to_date('09/01/2010','MM/DD/YYYY'), 910, 832600);
    INSERT INTO STATION_LOG (4, to_date('12/31/2010','MM/DD/YYYY'), 810, 239800);
    How do you get the initial_tons in the output above from the data above? Did you mean to post some new sample data for station_log?
    I still get ORA-00928 errors from all the INSERT statements (they are missing the VALUES keyword).
    As I said, I can do it inside a loop in PL/SQL, but I got completely stumped on how I could accomplish this in SQL. The trick is that if I can do it in SQL, I can allow the user to export the data to csv using built-in functionality. If I have to do it in PL/SQL, I can't provide the export as easily.
    One more thing I just thought about: I am going to need to use a BETWEEN on the dates of the data I need to grab. I obviously don't want to grab data past another inventory tally record from STATION_LOG for the same station, and I can use an NVL so it cuts off at SYSDATE. I obviously haven't hauled anything in the future ;)
    I doubt if I'll get enough information to do this for you before I leave on vacation.
    Here's an example of what you need to do using the scott.emp table instead of your station_log table:
    SELECT       job
    ,       hiredate
    ,       sal
    FROM       scott.emp
    ORDER BY  job
    ,            hiredate
    JOB       HIREDATE           SAL
    ANALYST   03-Dec-1981       3000
    ANALYST   19-Apr-1987       3000
    CLERK     17-Dec-1980        800
    CLERK     03-Dec-1981        950
    CLERK     23-Jan-1982       1300
    CLERK     23-May-1987       1100
    MANAGER   02-Apr-1981       2975
    MANAGER   01-May-1981       2850
    MANAGER   09-Jun-1981       2450
    PRESIDENT 17-Nov-1981       5000
    SALESMAN  20-Feb-1981       1600
    SALESMAN  22-Feb-1981       1250
    SALESMAN  08-Sep-1981       1500
    SALESMAN  28-Sep-1981       1250
    Say we want to find, for each job in a given list, the sal that corresponds to the last hiredate that is no later than the given report_date. (This seems pretty close to what you need: for each stn_key in a given list, the quantity that corresponds to the last row that is no later than the given report_date.)
    That is, if we're only interested in the jobs CLERK, MANAGER and PRESIDENT, and only in hiredates on or before December 31, 1981, the output would be:
    JOB       LAST_HIREDA   LAST_SAL
    CLERK     03-Dec-1981        950
    MANAGER   09-Jun-1981       2450
    PRESIDENT 17-Nov-1981       5000
    That is, we want to ignore all jobs that are not in the given list, and all rows whose hiredate is after the given report_date. Among the rows that remain, we're interested only in the last one for each job.
    Note that the last_sal for CLERK is not 1300 or 1100: those values were after the given report_date. Also, the last_sal for CLERK is not 800; that's not the last one of the remaining rows.
    Here's one way to get those results in pure SQL:
    DEFINE       jobs_wanted     = "CLERK,MANAGER,PRESIDENT"
    DEFINE       report_date     = "DATE '1981-12-31'"
    SELECT       job
    ,       MAX (hiredate)                         AS last_hiredate
    ,       MAX (sal) KEEP (DENSE_RANK LAST ORDER BY hiredate)     AS last_sal
    FROM       scott.emp
    WHERE       hiredate     <= &report_date
    AND        ',' || '&jobs_wanted' || ','     LIKE
           '%,' || job                  || ',%'
    GROUP BY  job
    ORDER BY  job
    ;
    I used substitution variables for the parameters. You could use bind variables, or hard-code the values instead.
    The WHERE clause is applied before aggregate functions are computed, so rows after &report_date don't matter.
    "MAX (sal) KEEP (DENSE_RANK LAST ORDER BY hiredate)" means the sal that is associated with the last row, in order by hiredate. If there happens to be a tie (that is, two or more rows have exactly the same hiredate, and no row is later), then the highest sal from those rows is returned; that's what MAX means here. Ties may be impossible in your data.
    You need to write a similar query using your station_log table, and join the results of that to your load_data table, including only the rows that have dates between the date in the sub-query (last_hiredate in my example) and the parameter report_date. That can be part of the join condition.
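    Applied to the tables in this thread, the combined query might look something like the following untested sketch (table and column names are taken from the posts above; the MAN_SPOT_WT fallback from the original CASE expression is omitted for brevity):
    DEFINE   stations_wanted = "810,410"
    DEFINE   report_date     = "SYSDATE"
    WITH last_inv AS
    (
        SELECT    stn_key
        ,         MAX (as_of_dt)                                      AS as_of_date
        ,         MAX (tons) KEEP (DENSE_RANK LAST ORDER BY as_of_dt) AS initial_tons
        FROM      station_log
        WHERE     as_of_dt <= &report_date
        AND       ',' || '&stations_wanted' || ','  LIKE  '%,' || stn_key || ',%'
        GROUP BY  stn_key
    )
    SELECT    i.stn_key                            AS station
    ,         i.initial_tons
    ,         NVL (SUM (w.spot_weight), 0)         AS used_tons
    ,         TO_CHAR (i.as_of_date, 'MM/DD/YYYY') AS as_of_date
    FROM      last_inv       i
    LEFT JOIN tc_load_jobs   j  ON  fn_stn_key (j.factory_id, j.station_id) = i.stn_key
    LEFT JOIN tc             c  ON  c.tc_id = j.tc_id
                                AND c.date_index BETWEEN i.as_of_date AND &report_date
    LEFT JOIN spot_weights   w  ON  w.date_index   = c.date_index
                                AND w.load_rate_id = j.load_rate_id
    GROUP BY  i.stn_key, i.initial_tons, i.as_of_date;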

  • Australian data center migration postponed

    Hi everyone,
    Due to some last minute developments, we have decided to postpone the first phase of the Australian datacenter migration, which was scheduled for this weekend.
    We have decided to postpone the migration until we are able to focus our efforts exclusively on the migration and ensure a minimal disruption to your experience.
    A new date will be communicated later, on our blog, forums and dedicated migration site.
    Thank you for your patience and understanding,
    The Adobe Business Catalyst team

    Sorry, but this is the wrong forum for this kind of post. Here you get assistance with problems migrating from foreign databases to Oracle.

  • Migrate Standby ASA to Backup Data Center

    Hello Experts,
    We have a backup data center where I am now planning to provide backup internet service (in case the internet is down or there is a power outage at the main server room).
    I have a pair of Cisco ASA 5540s, one of which I need to move to the backup data center (BDC). Presently I have an ADSL router at the disaster server room with a static public IP from the ISP.
    Currently, I am publishing all my internal resources through the ASA. Now my question: if I move the standby ASA to the disaster server room, how can I publish the same internal resources through the standby ASA and make the standby active during downtime of the main server room?
    Can anyone please suggest how to achieve this setup? Is this scenario possible?
    Thanks in advance.
    Samir

    Hello,
    I knew it.
    I'll just tell you from the beginning; I hope it helps you understand. I appreciate your help.
    Presently at my main data center I have a leased line router and then 2 ASA 5540s (with failover active/standby).
    I was thinking of moving 1 ASA to the backup disaster server room. In this regard, I asked earlier how I can still achieve active/standby after migrating to the backup room. But you have answered that query.
    Query 2
    I have got new ADSL service and a router with a public static IP at the backup server room. Now I have moved one of my ASAs.
    How can I keep publishing the internal resources (like access to the internal web server, RDP connections) using this ADSL service if the main server room is completely down?
    Hope it is clear.
    Thanks

  • Need Suggestion On Real Time Data Access

    Hi,
    Our application (let's say A) runs on 8.1.7.4.0.
    Another application (let's say B) runs on 10.2.0.4.0.
    Applications A and B exchange data through DB links.
    Both applications perform SELECTs and DML on the other's tables.
    Application A has views (approx. 10 views) on application B's tables through DB links.
    Application B also has views (approx. 50 views) on application A's tables through DB links.
    Application A has stored procedures (approx. 10 SPs) through which it inserts/updates two of application B's tables, with approx. 15K calls of those procedures per day.
    Application B also calls some of application A's procedures to insert/update data in application A's tables.
    Currently both applications' data centers are at the same place, but there is a proposal/need to move application B's data center far away from the current location. Because of this, we assume that DB links over the WAN will create performance issues, and we are searching for a probable replacement.
    After a couple of discussions we have identified approx. five of application A's tables which are frequently accessed by application B and expected to provide real-time data.
    Similarly, application A expects real-time data from two of application B's tables.
    The 'Golden Gate' mechanism came up in our discussion, but it's expensive.
    Can we replace the stored procedure code/DML with web service calls or the like?
    Can anyone suggest what kind of approach we should take here?

    The views residing on application A's side use many internal tables and one or two of application B's tables.
    I have no idea about 'multi-master replication'. Is it two-way replication? Are you aware of any documentation I can refer to? How costly is it (if you can give an idea)?
    Also, among the many tables, we need near-real-time (2-minute interval) replication on 5 tables.
    MVs again depend on DB links, which is exactly what we are trying to replace (assuming performance issues).
    With a fast refresh (let's say a 2-minute interval) on MVs, do you think it could cause performance issues, especially when the data centers are at different locations?
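    If you do test the MV route, a fast-refresh setup at a 2-minute interval would look roughly like this sketch (the table and DB link names are hypothetical, and version compatibility between 8.1.7 and 10.2 would need checking):
    -- On the master site (application A's database): an MV log, so that
    -- each fast refresh ships only the changed rows across the WAN.
    CREATE MATERIALIZED VIEW LOG ON app_a_orders WITH PRIMARY KEY;
    -- On application B's database: refresh every 2 minutes over the DB link.
    CREATE MATERIALIZED VIEW app_a_orders_mv
        REFRESH FAST
        START WITH SYSDATE
        NEXT  SYSDATE + 2/(24*60)
    AS
    SELECT * FROM app_a_orders@app_a_link;
    Because only deltas cross the link, the steady-state WAN traffic is usually far smaller than repeatedly querying the remote table directly.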

  • Is there a way other than redeployment to migrate existing Cloud Services from one data center to another?

    We need to correct the affinity of Cloud Service, Azure DB, Storage and Mobile Services such that they are in a single data center and do not get impacted by network latency when these components communicate with each other. Also, to distribute various environments across different data centers, we need to realign some environments to mitigate all environments being impacted if there is an outage.
    Existing Cloud Services and related Service Bus queues and topics need to be moved to a different data center. Is there a way to do this without having to redeploy the cloud services (web and worker roles) or recreate the service bus entities (queues and topics)?
    Also, the existing Cloud Service URL needs to be retained, without which user authentication won't be possible; hence, when completed, the new cloud service should have the same URL.
    Please provide the best available options for achieving this, or ask questions if more information is needed.

    Hi sumeetd,
    As far as I know, there is currently no direct way to move services from one data center to another other than redeployment. You could submit a feature suggestion via this page (http://feedback.azure.com/forums/34192--general-feedback
    ). At the same time, you could contact the Azure support team via the channel below:
    http://www.windowsazure.com/en-us/support/contact/
    Any questions, please feel free to let me know.
    Regards,
    Will

  • Need Suggestion for Archival of a Table Data

    Hi guys,
    I want to archive one of my large tables. The structure of the table is below.
    Daily, around 40000 rows are inserted into the table.
    Need suggestions for the same. Will partitioning help, and on what basis?
    CREATE TABLE IM_JMS_MESSAGES_CLOB_IN
    ( LOAN_NUMBER    VARCHAR2(10 BYTE),
      LOAN_XML       CLOB,
      LOAN_UPDATE_DT TIMESTAMP(6),
      JMS_TIMESTAMP  TIMESTAMP(6),
      INSERT_DT      TIMESTAMP(6)
    )
    TABLESPACE DATA
    PCTUSED 0
    PCTFREE 10
    INITRANS 1
    MAXTRANS 255
    STORAGE ( INITIAL 1M
              NEXT 1M
              MINEXTENTS 1
              MAXEXTENTS 2147483645
              PCTINCREASE 0
              BUFFER_POOL DEFAULT )
    LOGGING
    LOB (LOAN_XML) STORE AS
    ( TABLESPACE DATA
      ENABLE STORAGE IN ROW
      CHUNK 8192
      PCTVERSION 10
      NOCACHE
      STORAGE ( INITIAL 1M
                NEXT 1M
                MINEXTENTS 1
                MAXEXTENTS 2147483645
                PCTINCREASE 0
                BUFFER_POOL DEFAULT ) )
    NOCACHE
    NOPARALLEL;
    do the needful.
    regards,
    Sandeep

    There will not be any updates/deletes on the table.
    I have created a partitioned table with the same structure, and I am inserting the records from my original table into this partitioned table, where I will maintain data for 6 months.
    After loading the data from the original table into the archive table, I will truncate the original table.
    If my original table is partitioned, then what about restoring the data? How will I restore last month's data?
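    On the partitioning question: a range-partitioned layout by month on INSERT_DT would let you archive and drop (or exchange) whole months instead of copying and truncating rows. A hedged sketch, with hypothetical partition bounds:
    CREATE TABLE IM_JMS_MESSAGES_CLOB_IN_P
    ( LOAN_NUMBER    VARCHAR2(10 BYTE),
      LOAN_XML       CLOB,
      LOAN_UPDATE_DT TIMESTAMP(6),
      JMS_TIMESTAMP  TIMESTAMP(6),
      INSERT_DT      TIMESTAMP(6)
    )
    PARTITION BY RANGE (INSERT_DT)
    ( PARTITION p_2012_01 VALUES LESS THAN (TIMESTAMP '2012-02-01 00:00:00'),
      PARTITION p_2012_02 VALUES LESS THAN (TIMESTAMP '2012-03-01 00:00:00'),
      PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );
    -- After a month has been exported or backed up elsewhere, drop it in one step;
    -- restoring "last month" then means re-importing only that partition's export.
    ALTER TABLE IM_JMS_MESSAGES_CLOB_IN_P DROP PARTITION p_2012_01;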

  • Need suggestion to get data from change log table of ODS.

    Hello,
    There is a case where I am loading opportunity header data from the header ODS and opportunity item data from the item ODS into the opportunity cube.
    Status (1 = OPEN, 2 = WON, etc.) of the opportunity is available only in the header ODS and not in the item ODS.
    While loading data from the header ODS to the cube, I load it directly; but while loading data from the item ODS to the cube, I use the active data table of the header ODS as a lookup in the update rule, selecting the status from it.
    Since the active data table will have only after-image records, there is some data mismatch in the report, as I am selecting data from the active data table of the header ODS while loading data from the item ODS to the cube.
    I need to select data from the change log instead of the active data table in order to get the before image as well and overcome this issue. Is there any way I can select from the change log instead of the active data table, given that change logs are generated at run time?
    Please let me know if you have any suggestions.
    Regards,
    Sanjay Chaurasia.

    Hi,
    You can use the changelog table of the DSO.
    Right-click the header DSO, choose Manage, go to the Contents tab and click the Change Log table. There you can see the technical name of the change log table.
    In the update rule routine, give the technical name of the change log table instead of the active table name.
    Hope it helps.
    Krishna

  • Server Requirements - AlwaysOn on a single server? - Is clustering needed even with one server in another data center?

    Hello:
    I have been asking these questions in different forums and got different responses, so I wanted to know if asking "Microsoft" will give me some good direction. (All on 2012 versions, including the OS.)
    Question 1.- Always On "HAS" to be configured on a WSFC node? How about in a Single SQL Server. (NO Clustering)?.
    Question 2.- What about our mirroring processes configured and running in single servers, do we have to have WSFC installed before we can upgrade them to Always On?.
    Question 3.- In a case I have WSFC, and configure Always On, can my second or third replica reside in a single SQL Server? (No WSFC). What if I can not have Clustering in a DR Data Center? or I do have only VM's on the DR Center?
    Any help will be greatly appreciated.
    Thanks
    Oscar Campanini

    Hi Oscar,
    Please find the answers below.
    Question 1.- Always On "HAS" to be configured on a WSFC node? How about in a Single SQL Server. (NO Clustering)?.
                    - Yes. Each replica must be on a different node of a WSFC cluster. Without a WSFC cluster you cannot create AlwaysOn, as it relies on the failover capabilities of the cluster.
    Question 2.- What about our mirroring processes configured and running in single servers, do we have to have WSFC installed before we can upgrade them to Always On?.
                     - You cannot really upgrade a database mirroring configuration to AlwaysOn; they are different and work differently. Again, for AlwaysOn, each participating replica must be on a WSFC cluster.
    Question 3.- In a case I have WSFC, and configure Always On, can my second or third replica reside in a single SQL Server? (No WSFC). What if I can not have Clustering in a DR Data Center? or I do have only VM's on the DR Center?
                   - No, all replicas have to be on individual nodes of the WSFC cluster.
    Note: SQL Server doesn't have to be clustered.
    Consider the following scenario: you need to create AlwaysOn with a 3-node topology, i.e. 1 primary, 1 secondary and 1 read-only secondary.
    You need all three of these nodes to be part of a Windows Server Failover Cluster. The clustering needs to be done only at the Windows level. You can install standalone SQL Servers on all 3 nodes and then configure them as replicas in AlwaysOn.
    Read these links to clear your questions -
    http://technet.microsoft.com/en-gb/sqlserver/gg490638.aspx
    http://technet.microsoft.com/en-us/library/hh510230(v=SQL.110).aspx
    http://technet.microsoft.com/en-us/library/ff878487(v=sql.110).aspx#ServerInstance
    Note: When I said AlwaysOn, I was referring to Availability Groups.
    Regards, Ashwin Menon My Blog - http://sqllearnings.com

  • What is the best approach to migrate a SharePoint farm from one data center to another?

    We have two web front-end servers, one application server and two database server instances, and we have to migrate the complete farm from one data center to the other with minimal downtime and end-user impact.
    Please provide your best input on this.
    Thanks in advance.

    Create a new farm in the secondary data center at the same patch level with the desired configuration. Replicate the databases using the method of choice (Mirroring, AlwaysOn, etc.). Create a downtime window during which you can attach the databases to the new farm's Web Application(s)/Service Application(s).
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Need suggestion in getting data using JDBC

    Hi all, need suggestions:
    I have a VO corresponding to a database table.
    When I try to get the records from that table,
    how can I assign a particular column value to the
    corresponding VO setter method?
    Please do the needful.

    Hello inform2csr,
    Your question is not so clear.
    Can you be more precise?
    What is VO?

  • Ask the Expert: Scaling Data Center Networks with Cisco FabricPath

    With Hatim Badr and Iqbal Syed
    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco FabricPath with Cisco technical support experts Hatim Badr and Iqbal Syed. Cisco FabricPath is a Cisco NX-OS Software innovation combining the plug-and-play simplicity of Ethernet with the reliability and scalability of Layer 3 routing. Cisco FabricPath takes many of the best characteristics of traditional Layer 2 and Layer 3 technologies and combines them into a new control-plane and data-plane implementation that joins the immediately operational "plug-and-play" deployment model of a bridged spanning-tree environment with the stability, re-convergence characteristics, and ability to use multiple parallel paths typical of a Layer 3 routed environment. The result is a scalable, flexible, and highly available Ethernet fabric suitable for even the most demanding data center environments. Using FabricPath, you can build highly scalable Layer 2 multipath networks without the Spanning Tree Protocol. Such networks are particularly suitable for large virtualization deployments, private clouds, and high-performance computing (HPC) environments.
    This event will focus on technical support questions related to the benefits of Cisco FabricPath over STP or VPC based architectures, design options with FabricPath, migration to FabricPath from STP/VPC based networks and FabricPath design and implementation best practices.
    Hatim Badr is a Solutions Architect for Cisco Advanced Services in Toronto, where he supports Cisco customers across Canada as a specialist in Data Center architecture, design, and optimization projects. He has more than 12 years of experience in the networking industry. He holds CCIE (#14847) in Routing & Switching, CCDP and Cisco Data Center certifications.
    Iqbal Syed is a Technical Marketing Engineer for the Cisco Nexus 7000 Series of switches. He is responsible for product road-mapping and marketing the Nexus 7000 line of products with a focus on L2 technologies such as VPC & Cisco FabricPath and also helps customers with DC design and training. He also focuses on SP customers worldwide and helps promote N7K business within different SP segments. Syed has been with Cisco for more than 10 years, which includes experience in Cisco Advanced Services and the Cisco Technical Assistance Center. His experience ranges from reactive technical support to proactive engineering, design, and optimization. He holds CCIE (#24192) in Routing & Switching, CCDP, Cisco Data Center, and TOGAF (v9) certifications.
    Remember to use the rating system to let Hatim and Iqbal know if you have received an adequate response.  
    They might not be able to answer every question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community Unified Computing discussion forum shortly after the event. This event lasts through Dec 7, 2012. Visit this support forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi Sarah,
    Thank you for your question.
    Spanning Tree Protocol is used to build a loop-free topology. Although Spanning Tree Protocol serves a critical function in these Layer 2 networks, it is also frequently the cause of a variety of problems, both operational and architectural.
    One important aspect of Spanning Tree Protocol behavior is its inability to use parallel forwarding paths. Spanning Tree Protocol forms a forwarding tree, rooted at a single device, along which all data-plane traffic must flow. The addition of parallel paths serves as a redundancy mechanism, but adding more than one such path has little benefit because Spanning Tree Protocol blocks any additional paths.
    In addition, rooting the forwarding path at a single device results in suboptimal forwarding paths, as shown below. Although a direct connection may exist, it cannot be used because only one active forwarding path is allowed.
    Virtual PortChannel (vPC) technology partially mitigates the limitations of Spanning Tree Protocol. vPC allows a single Ethernet device to connect simultaneously to two discrete Cisco Nexus switches while treating these parallel connections as a single logical PortChannel interface. The result is active-active forwarding paths and the removal of Spanning Tree Protocol blocked links, delivering an effective way to use two parallel paths in the typical Layer 2 topologies used with Spanning Tree Protocol.
    vPC provides several benefits over standard Spanning Tree Protocol, such as the elimination of blocked ports; also, both vPC switches can act as the active default gateway for first-hop redundancy protocols such as Hot Standby Router Protocol (HSRP): that is, traffic can be routed by either vPC peer switch.
    At the same time, however, many of the overall design constraints of a Spanning Tree Protocol network remain even when you deploy vPC, such as:
    1.     Although vPC provides active-active forwarding, only two active parallel paths are possible.
    2.     vPC offers no means by which VLANs can be extended, a critical limitation of traditional Spanning Tree Protocol designs.
    With Cisco FabricPath, you can create a flexible Ethernet fabric that eliminates many of the constraints of Spanning Tree Protocol. At the control plane, Cisco FabricPath uses a Shortest-Path First (SPF) routing protocol to determine reachability and selects the best path or paths to any given destination in the Cisco FabricPath domain. In addition, the Cisco FabricPath data plane introduces capabilities that help ensure that the network remains stable, and it provides scalable, hardware-based learning and forwarding capabilities not bound by software or CPU capacity.
    Benefits of deploying an Ethernet fabric based on Cisco FabricPath include:
    • Simplicity, reducing operating expenses
    – Cisco FabricPath is extremely simple to configure. In fact, the only necessary configuration consists of distinguishing the core ports, which link the switches, from the edge ports, where end devices are attached. There is no need to tune any parameter to get an optimal configuration, and switch addresses are assigned automatically.
    – A single control protocol is used for unicast forwarding, multicast forwarding, and VLAN pruning. The Cisco FabricPath solution requires less combined configuration than an equivalent Spanning Tree Protocol-based network, further reducing the overall management cost.
    – A device that does not support Cisco FabricPath can be attached redundantly to two separate Cisco FabricPath bridges with enhanced virtual PortChannel (vPC+) technology, providing an easy migration path. Just like vPC, vPC+ relies on PortChannel technology to provide multipathing and redundancy without resorting to Spanning Tree Protocol.
    • Scalability based on proven technology
    – Cisco FabricPath uses a control protocol built on top of the powerful Intermediate System-to-Intermediate System (IS-IS) routing protocol, an industry standard that provides fast convergence and that has been proven to scale up to the largest service provider environments. Nevertheless, no specific knowledge of IS-IS is required in order to operate a Cisco FabricPath network.
    – Loop prevention and mitigation is available in the data plane, helping ensure safe forwarding that cannot be matched by any transparent bridging technology. The Cisco FabricPath frames include a time-to-live (TTL) field similar to the one used in IP, and a Reverse Path Forwarding (RPF) check is also applied.
    • Efficiency and high performance
    – Because equal-cost multipath (ECMP) can be used in the data plane, the network can use all the links available between any two devices. The first-generation hardware supporting Cisco FabricPath can perform 16-way ECMP, which, when combined with 16-port 10-Gbps port channels, represents a potential bandwidth of 2.56 terabits per second (Tbps) between switches.
    – Frames are forwarded along the shortest path to their destination, reducing the latency of the exchanges between end stations compared to a spanning tree-based solution.
    – MAC addresses are learned selectively at the edge, allowing the network to scale beyond the limits of the MAC address tables of individual switches.

  • Data Center Aggregation/Access SW Nexus

    I have a design scenario for a backup email data center, and I faced some difficulties when trying to match the requirements to boxes.
    The design requires a Nexus 5548UP in addition to 2x virtualized data center switches; it also requires 12x CPU licenses for the VM virtual network switch. I suggested adding the Nexus 1000 series, but the concern is: can I use it without adding a Nexus 2K? If I have to use the N2K and N1K, what is the best configuration scenario?

    Hi Shakeeb,
    I don't understand your question very well, but I will try to clarify some points.
    You don't need a Nexus 2000 if you have enough ports available in your Nexus 5500, even if you will use the Nexus 1000V.
    In this scenario, what I recommend is to connect both Nexus 5548s to each other and create a vPC with the upstream routers and the downstream blades and storage.
    Richard
