Need help with data center design

Sir,
I am going to design a data center that will include the following equipment:
1. one router 7609
2. two core switch (WS-C6509-E)
3. two firewall (WS-C6506-E, with Firewall blade)
4. one VOICE ROUTER (CISCO2821with PVDM2-64, VWIC2-2MFT-T1/E1, PVDM2-32)
5. one Remote Access Server (AS5400XM, AS5000XM 60 Dial Port Feature Card, AS5400 Octal E1/PRI DFC card)
6. two CALLMANAGER-5.1
7. multiple Cisco IP Phone 7940G units with Video Advantage and VT Camera II
8. one Gatekeeper (2811)
9. one Internet Router (3845)
10. one Authentication, Authorization and Accounting (AAA) System
11. one ISDN RAS 2811 with 2-Port Channelized E1/T1/ISDN-PRI Network Module, with video conferencing (Polycom)
12. one Network Intrusion Detection/ Prevention System (NIDS)
13. one NMS
14. one Content Switch for Server Load Balancing
15. multiple Video Phone
16. many servers (mail, web, storage, etc.)
17. Polycom MGC 100
18. Polycom 7000
Also, twenty 7206 VXR routers will connect to the 7609 router over leased lines.
If you could send me some links or sample designs, and share some advice on where I can gather ideas to design this data center properly, I would appreciate it.
thanks
tirtha

IMO, the best place to start is by reading the SRNDs. They can be found here -
http://www.cisco.com/en/US/netsol/ns656/networking_solutions_design_guidances_list.html
Hope that helps.

Similar Messages

  • Data Center Design

I am looking for a good article about data center design. I need some information about requirements for topics such as temperature control, power, security, ventilation, etc.

The Practice of System and Network Administration is a great resource for the "other" things that make a data center successful.
    http://www.amazon.com/gp/product/0321492668?ie=UTF8&tag=wwwcolinmcnam-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=0321492668
    Another great resource that you should reference is your local power company. PG&E for example will consult with you, and sometimes pay for upgrades to lower your power consumption.
    Here is an article that talks about that in general -
    http://www.colinmcnamara.com/2008/02/22/moving-towards-a-green-data-center-truth-behind-the-hype
If this is helpful, please rate it.
    --Colin

• I need the format for Excel file data to load into an InfoCube and planning area

    Hi gurus,
I need the format for an Excel file whose data will be loaded into an InfoCube and then into a planning area.
Can you tell me what I should maintain in the header?
My understanding so far is something like:
plant,location,customer,product,history qty,calander
100,delhi,suresh,nokia,250,2011211
Please let me know whether this is right or wrong, and explain the Excel file format.
    babu

    Hi Babu,
The file format should match what you want to upload; the column sequence should follow the communication structure:
Initial columns: characteristics (ex: plant, location, customer, product)
Then the date column (check the date format) (ex: calander)
Last columns: key figures (ex: history qty)
    Hope this helps.
    Regards,
    Nawanit

  • Ip addressing for data center

Can you suggest which address pool we should use for the data center, public or private? Which is best?

You will encounter conflicts ONLY if you are connecting to a network that is using your same address space. See more below.
The private IP addresses that you assign to a private network (inter-office LAN, Internet Service Provider customer base, campus network, etc.) should fall within the following three blocks of the IP address space, per RFC 1918:
10.0.0.0 to 10.255.255.255, which provides a single Class A network, with default subnet mask 255.0.0.0
(up to 16,777,216 addresses, good for VERY large enterprises such as internet service providers or other global deployments)
172.16.0.0 to 172.31.255.255, which provides 16 contiguous Class B networks, with default subnet mask 255.255.0.0
(up to 1,048,576 addresses, good for large enterprises such as colleges and governmental organizations)
192.168.0.0 to 192.168.255.255, which provides 256 contiguous Class C networks, with default subnet mask 255.255.255.0
(up to 65,536 addresses, widely used by default in consumer/retail networking equipment)
Explanations of subnet masks, network classes, and other technical details are readily available on the internet.
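A quick way to check whether a given address falls inside one of the three RFC 1918 private blocks is Python's standard ipaddress module; the helper name below is just for illustration:

```python
import ipaddress

# The three RFC 1918 private blocks described above.
PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """Return True if addr lies inside any RFC 1918 private block."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in PRIVATE_BLOCKS)

print(is_rfc1918("10.1.2.3"))     # True
print(is_rfc1918("172.32.0.1"))   # False: 172.16.0.0/12 ends at 172.31.255.255
print(is_rfc1918("8.8.8.8"))      # False: public address
```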

  • Data center design guide

    Hi all,
Is anybody familiar with a good design guide for Cisco data centers involving Nexus 2000, 5000 & 7000 with FCoE?
    thanks,

    Hi ,
    Check out the below link on Data center design with Nexus switches
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/C07-572831-00_Dsgn_Nexus_vPC_DG.pdf
    Hope to Help !!
    Ganesh.H
    Remember to rate the helpful post

  • Data Center Design: Nexus 7K with VDC-core/VDC-agg model

    Dear all,
I'm working on a collapsed VDC-core/VDC-agg model on the same chassis, with two redundant Cisco Nexus 7010s and a pair of Cisco 6509s used as a service chassis without VSS. Each core VDC has redundant links to two PEs based on the Cisco 7606.
After reading many Cisco design documents, I'm asking what the need for a core layer in a data center is, especially if it is small or medium sized, with only one aggregation layer, and dedicated to a virtualized multi-tenant environment. What drives the decision to have a core layer?
    Thanx

If your data center is small enough not to require a core, then it's fine to run with a collapsed core (distribution + core as the same device). For a redundant design you need to uplink all your distribution switches to each of your cores. If you have no cores, then you need a full mesh at your distribution layer (for full redundancy).
Let's say you have only 4 distribution pairs, so 8 switches. For full redundancy, each one needs an uplink to every other. This means you need 28 total ports to connect all the switches together (n(n-1)/2), assuming 1 link to each device. However, if you had redundant cores, the number of links used for uplinks drops to 21 total (this includes the link between each distribution pair and the link between the two cores). So here you're only saving 7 links; you're not gaining much by adding a core.
However, if you have 12 distribution pairs, so 24 switches, full redundancy means 276 links dedicated to this. If you add a core, this drops to 61 links. Here you see the payoff.
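The link arithmetic above can be checked with a short Python sketch; the function names are just for illustration:

```python
def full_mesh_links(n: int) -> int:
    """Links needed to connect n switches in a full mesh: n(n-1)/2."""
    return n * (n - 1) // 2

def with_cores(pairs: int) -> int:
    """Links with two redundant cores: each of the 2*pairs distribution
    switches uplinks to both cores, plus one link inside each distribution
    pair, plus one link between the two cores."""
    switches = 2 * pairs
    return switches * 2 + pairs + 1

print(full_mesh_links(8), with_cores(4))    # 28 21
print(full_mesh_links(24), with_cores(12))  # 276 61
```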

• Wiring Question for Data Center

    I work in what I would consider to be a small/mid sized data center. We use two 6513 as the core/distribution for ~25 racks of servers.
My question comes in the way of cabling the servers to the core. Currently long patch cords run between the 6513s and each server. It's functional, but a mess.
    I'm trying to figure out the best way to clean up the mess and make it look professional.
    Most people seem to suggest 2 different ways to accomplish this:
    1) Install switches in each rack and run fiber from the core to the rack. Wire each server to the switch in the rack.
    2) Install 24/48 port patch panels between the core area and the racks.
    I'm wondering what people think of these ideas and if there are any other suggested ways of accomplishing this?
    Andy

    Hi Andy,
    Here's something that we used to do where I worked:
    We had 6509's with three/four 48 port blades servicing between 150 and 200 phones roughly. I had four total switches, one on each of four floors. So this would be roughly similar to your DC environment, only we're servicing longer horizontal runs and phones, not servers -- but the idea is the same (i.e. high density cabling issues).
    Lord knows that when you're plugging in 48 cables into one of those blades, it can get pretty crowded. And since we don't yet know how to alter the laws of physics that determine space requirements, we have to search for alternatives.
    Back to my environment: On three of the four floors, we just wired straight from the patch panel (that ran to floor locations) to the switch. Quite a mess when you're running in 48 cables to one blade! However, this is traditional and this is what we did. My cabling guy (very smart fella) suggested something else. At the time I was too chicken to do it on the other floors, but I did agree to try it on one floor. Here's what we did:
    He ran Cat5 (at the time, that was standard) connections in 48 cable bunches from an adjacent wall into the switch. They had RJ-45 connections so that they could plug in, and they were all nice and neat. On the other end, they plugged in to a series of punch down blocks (kind of like you see in a phone room for telephone structured cabling). These, in turn, were cross connected to floor locations on another punch down block that went to the floor locations. Now, whenever we wanted to make a connection live, we simply had to connect the correct CAT5 jumper wire from one punch down block to the other. You never touch the actual ports in the switch. They just stay where they are. All alterations are done on the punch down blocks. This keeps things nice and neat and there's no fiddling with cables in the switch area. Any time you need to put in a new blade, you just harness up 48 more cables (we called them pigtails) and put them in the new blade.
    NOTE: You could do the exact same thing with patch panels instead of punch down blocks, but with higher densities, it's a bit easier to use the blocks and takes up much less space.
    ADVANTAGES:
    * Very neat cable design at the switch side.
    * Never have to squeeze patch cables in and out.
    * Easy to trace cables (but just better to document them and you'll never have to trace them).
    * Makes moves, adds, and changes (particularly adds) very easy.
    DISADVANTAGES:
    * Not sure that you can do it with CAT6.
    * You have to get a punch down tool and actually punch cables (not too bad though after you do a few).
    * You need to make sure that you don't deprecate the rating on the cable by improperly terminating it (i.e. insufficient twists)
    Anyway, I haven't had a need to do this in a while and I no longer work at the same place, but my biggest concern would be if that meets with the CAT6 spec. Not sure about that, but your cabling person could probably tell you.
    I'm not a big fan of decentralizing the switches to remote locations. It can become cumbersome and difficult to manage if you end up with a lot of them. Also, it doesn't scale well and can end up with port waste (i.e. you have 24 servers in one cabinet on one switch and then along comes 25; you now have to buy another 12 or 24 port switch to service the need with either 11/23 ports going to waste -- not good).
    Good luck. Let us know how you make out. I'd be glad to go in to more detail if the above isn't explained well enough.
    Regards,
    Dave

  • Please shed some light on Data Center design

    Hi,
I'd like you to recommend what the design should be. I'm familiar with the HP blade system. Let me clarify the existing devices.
    1. HP Blade with Flex Fabric. It supports FCOE.
    2. MDS SAN switch for the storage
    3. Network Switch for IP network.
    4. HP Storage.
The HP Blade has 2 interface types, for the IP network (network switch) and Fibre Channel (SAN).
What is the benefit of using a Nexus switch and FCoE with my existing devices? What would a new design with a Nexus switch look like? Please share your ideas.
    THX
    Toshi 

    Hi, Toshi:
    Most of these chat boards have become quite boring. Troubleshooting OSPF LSA problems is old news. But I do pop my head in every now and then. Also, there are so many other companies out there doing exciting things in the data center. You have Dell, Brocade, Arista, Juniper, etc. So one runs the risk of developing a myopic view of the world of IT by lingering around this board for too long.
If you want to use the new B22 FEX for the HP c7000 blade chassis, you certainly can. That means the Nexus will receive the FCoE traffic and leverage its FCF functionality; either separate the Ethernet and FC traffic there, or create a VE-port instantiation with another FCF for multihop deployments. Good luck fighting the SAN team on that one! Another aspect of using the HP B22 is that the FEX is largely plug and play, so you don't have to manage the Flex Fabric switches.
    HTH

  • Control Plane Policing (CoPP) for Data Center

    Hi All,
I am planning to apply CoPP on the different routers and switches of our data center, which comprises Cisco 6513 (VSS), Catalyst 3750, Cisco 3845, and Cisco 2811 devices.
My questions are:
    1. Do we have to apply CoPP on Catalyst 3750, as these are DMZ switches only?
    2. How to find the packet processing rate from router and switches?
    3. Any best practices CoPP template for routers running OSPF and BGP?
    Thanks and Regards,
    Ahmed.

1. You would need to apply CoPP to all routers/switches that are manageable from untrusted sites. So even if you have non-DMZ switches that can be telnetted to from the outside, for example, applying CoPP to them would be helpful.
Follow-up: Do we not need to apply CoPP on switches and routers that cannot be telnetted to from outside?
Control plane traffic is traffic that goes to the control plane of the router, such as management traffic, SNMP, etc. If there is a firewall securing you from the outside, I would consider the switches more secure; it is not easy for an attacker to bring them to their knees from the outside. Control plane policing applies to all control plane traffic, but it is mostly against outsiders that someone would try to protect himself.
2. "sh proc cpu" would give you some insight into processes like SSH or Telnet and how much they take, though not the control-packet processing rate.
Follow-up: I want to know the maximum packet processing rate of a router or switch.
I don't think you will be able to pull that number.
3. It depends on how powerful the router is, how many commands you are running, and how much route processing is going on.
Follow-up: Any best practice for a router running OSPF with 200 routes?
Don't know of any.
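As a rough starting point for question 3, a generic CoPP skeleton for a router running OSPF and BGP often looks something like the sketch below. The ACL/class/policy names and police rates here are placeholders that must be tuned per platform and traffic baseline; this is not a Cisco-recommended template:

```
ip access-list extended ACL-COPP-ROUTING
 permit ospf any any
 permit tcp any any eq bgp
 permit tcp any eq bgp any
!
class-map match-all CM-COPP-ROUTING
 match access-group name ACL-COPP-ROUTING
!
policy-map PM-COPP
 ! Never drop routing-protocol traffic; police everything else.
 class CM-COPP-ROUTING
  police 300000 8000 conform-action transmit exceed-action transmit
 class class-default
  police 100000 8000 conform-action transmit exceed-action drop
!
control-plane
 service-policy input PM-COPP
```

Verify the effect with "show policy-map control-plane" before tightening the rates.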
    PK

  • Making powermac G4 MDD for data center for small office server

    Hi mac expert,
I have a small office that needs to pool all data, audio, and metadata (wav, mp3, flac, mp4, mov, MS Office xls, docs, jpeg, tiff) in one place, a hard disk that can be accessed from any Mac or PC in the office.
I have 2 Power Mac G4 1 GHz MDD units (133 MHz bus; both have 2 GB of RAM and a 120 GB internal hard drive installed).
I upgraded one of them with a dual 1.42 GHz processor and a 2 TB SATA hard disk attached to a Sonnet PCI SATA card.
Mac OS X 10.5.8 Leopard is installed.
An AirPort Extreme card is installed.
    my questions:
- How big a hard drive can I get? I installed a 2 TB drive with no partition, but sometimes when I open the disk I get a spinning cursor and it shows no items; the machine freezes and cannot be turned off from the Shutdown menu, so I have to press the Mac's power button.
- If I want to make a data center like the one described above using the Power Mac G4s I have, what should I do?
- Should I install the Leopard Server OS?
    thank you so much.

MacDrive may work; also see if these are still available...
    NTFS-3G Stable Read/Write Driver...
    http://www.ntfs-3g.org/
    MacFUSE: Full Read-Write NTFS for Mac OS X, Among Others...
    http://www.osnews.com/story/16930
    MacDrive for the PCs... allows them to Read/Write HFS+...
    http://www.mediafour.com/products/macdrive/

  • Need links for data structure and algorithms.

    Hi.
I am new to Java but need to learn data structures and algorithms.
Do you have any good links or boards to learn from?
    Thanx in advance

    http://www.amazon.com/exec/obidos/tg/detail/-/1571690956/ref=cm_huw_sim_1_3/104-7657019-1043968?v=glance
    http://www.amazon.com/exec/obidos/tg/detail/-/0534376681/ref=cm_huw_sim_1_4/104-7657019-1043968?v=glance
    http://www.amazon.com/exec/obidos/tg/detail/-/0672324539/ref=cm_huw_sim_1_2/104-7657019-1043968?v=glance
    http://www.amazon.com/exec/obidos/tg/detail/-/0201775786/qid=1060946080/sr=8-1/ref=sr_8_1/104-7657019-1043968?v=glance&s=books&n=507846
    $8 for the first

  • Need suggestion for data encryption

    Hello Experts,
I need your expert opinion on a data encryption method. We have a legal compliance requirement to implement data encryption as listed below. Let's say we have to apply encryption on 2 tables: (1) TAB_A, (2) TAB_B.
    (1) Need data encryption on the TAB_A & TAB_B for 2-3 columns and not the entire table.
    (2) Data should not be in readable format, if anyone connect to database and query the table.
    (3) We have reporting services on our tables but reporting services doesn't connect to our schema directly rather they connect to a different schema to which we have given the table Select grant.
    (4) Reports should work as it is, and users should see the data in readable format only.
    (5) There are batch processes which generates the data into these tables and we are not allowed to make any changes to these batch processes.
This is a business need that has to be delivered. I explored various options such as VPDs, data encryption methods, etc., but honestly none of these serve our business need. There is also a limitation on encrypting the data, as the data volume is quite high (a 30 TB DB) and users generally query millions of records at a time. The reports also have very tight SLAs. If we create an encryption wrapper, decryption will take longer in reports and cause SLA misses.
    Could someone please suggest any better solution to me or if something is inbuilt in Oracle? We are using Oracle 11g.
    Regds,
    Amit.

You can read about Transparent Data Encryption.
    Check
    http://docs.oracle.com/cd/B28359_01/network.111/b28530/asotrans.htm

  • I need options for data replication within production db and dimensional db

    Hi,
    I'm looking for options on how to solve this issue. We've 2 databases, one is our production, operative database, used by around 400 users at a time, and another one, which is our dimensional model of the same info, used to obtain reports. We also have a lot of ETL's (extract, transform and load) processes running every night to update the dim model.
My problem is that we have some online reports, and currently we get their data from the operational database, causing a performance issue for online operations. We want to migrate these reports to the dimensional model, and we're trying to find the best option for doing this.
    Options that we're considering are ETL's process running continuously every XX minutes, materialized views, ETL's on demand, and others.
    Our objective is to minimize performance issues on transactional database.
    We're using Oracle 8i (yes, the oldie one) and Reporting Services as report engine (reports just run a pkg to get data).
    Any option is welcome.
    Thx in advance.
    Regards,
    Adrian.

"The best option for you, if performance is the most important thing, is Oracle Streams. It is also the most complex, but the final results are very good."
Agreed. As User12345 points out, though, that requires Oracle 9.2 or higher.
"Another option is materialized views with fast refresh, which need materialized view logs on the master site. The first load is expensive, but if you refresh every 15 minutes the cost is not high."
I'd be careful about making that sort of statement. The overhead of both maintaining materialized view logs (which have to be written synchronously with the OLTP transactions and which impose an overhead roughly equivalent to a trigger on the underlying table) and doing fast refreshes every 15 minutes can be extensive depending on the source system. One of the reasons that Streams came about was to limit this overhead.
"For refresh I execute a cron shell that runs the DBMS_MVIEW.REFRESH package. My experience with group refresh was not good."
What was your negative experience with refresh groups? I've used them regularly without serious problems. Manual refreshes of individual materialized views against an OLTP system would scare the pants off me, because you'd inevitably end up with transactionally inconsistent views of the data (i.e. child records would be present with no parent record, updates that affect multiple tables would be partially replicated until the next refresh, etc.). Trying to move that sort of inconsistent data into a different data model, and trying to run reports off that data, would seem highly challenging at a minimum. Throwing everything into a single refresh group so that all the materialized views are transactionally consistent, or choosing a handful of refresh groups for those tables that are related to each other, seems like a far easier way to build a system.
    Justin

  • Need help for coding and designing

    Hi All,
    Problem Statement
A financial institution is on the verge of automating its existing “customer data management” system. Currently it maintains its customer data in comma-separated (CSV) files. You, as a software consultant, have been asked to create a system that is flexible, extensible, and maintainable. Your proposed system should also take care of moving the data from the old system to the new one.
The customer has the following specific requirements:
1. This is an internal offline application and need not be a web-based solution
2. The new application should be cost effective
3. The application should have good response time compared to the ones suggested by other vendors
4. Phase 1 work should concentrate mainly on building a simple system that moves the data from CSV to the new system and displays the moved data on the screen
    Sample CSV file containing customer data
    <first name>, <age>, <pan number>, <date of registration>
    Ramesh, 28, JWSLP1987, 30/01/2006
    Rajesh, 32, POCVT2087, 23/10/2005
    Shankar, 39, TRYUP3945, 24/7/2003
    Shyam, 45 BLIWP5612, 15/3/2004
    Requirements for coding DOJO
    •Ensure that your solution is built around open source frameworks and tools to avoid licensing issues. (Ex: Eclipse, MySQL, etc)
•Please share any ideas for writing code for this problem.
    Thank you.
    Edited by: user636482 on Oct 28, 2008 2:03 AM

    The first requirement is to NOT use software like "Oracle".
    The specifications are to use a CSV file as the "data source". The case asks you to "move" the data from CSV to "the new one" and display the moved data.
    So this sounds more like a data load routine -- but not to load into Oracle.
    You are in the wrong forum.
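That said, phase 1 (read the CSV and display it) is small enough to sketch with standard-library tools alone. Here is a minimal Python sketch; the field names are assumptions based on the sample layout above, and malformed rows are skipped rather than corrected:

```python
import csv
import io
from datetime import datetime

# Field layout from the sample: <first name>, <age>, <pan number>, <date of registration>
FIELDS = ["first_name", "age", "pan_number", "registered_on"]

SAMPLE = """\
Ramesh, 28, JWSLP1987, 30/01/2006
Rajesh, 32, POCVT2087, 23/10/2005
Shankar, 39, TRYUP3945, 24/7/2003
"""

def load_customers(lines):
    """Parse legacy CSV lines into a list of cleaned customer dicts."""
    customers = []
    for row in csv.reader(lines):
        if len(row) != len(FIELDS):
            continue  # tolerate malformed legacy rows instead of crashing
        record = dict(zip(FIELDS, (cell.strip() for cell in row)))
        record["age"] = int(record["age"])
        record["registered_on"] = datetime.strptime(
            record["registered_on"], "%d/%m/%Y").date()
        customers.append(record)
    return customers

for customer in load_customers(io.StringIO(SAMPLE)):
    print(customer)
```

Swapping io.StringIO(SAMPLE) for an open file handle covers the real migration input.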

  • Need help for date calculation

I have a string column, say TIME_USED, in 'HOURS:MINS:SECONDS' format (e.g. 153:29:41).
I need to calculate from it and store the result in another variable. Please check:
--> (MINS.SECONDS)/60, i.e. (29.41)/60 = 0.4901
then concatenate this value to HOURS, i.e. 153.4901.
How do I do this? I can't use the LIKE operator; I'd like to get the data and do it in one go in SQL. Can anyone help me out?
    Regards
    Naren

Like this:
with t as (
  select '153:29:41' str from dual
)
select trunc(hr + (mn/60), 4) val
  from (select to_number(regexp_substr(str, '[^:]+', 1, 1)) hr,
               to_number(regexp_substr(str, '[^:]+', 1, 2) || '.' ||
                         regexp_substr(str, '[^:]+', 1, 3)) mn
          from t)
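The same transformation can be sanity-checked outside the database. This Python sketch mirrors the intended arithmetic (the function name is just for illustration):

```python
def hours_decimal(time_used: str) -> float:
    """Convert 'HOURS:MINS:SECONDS' (e.g. '153:29:41') to HOURS plus
    (MINS.SECONDS)/60, truncated to 4 decimal places."""
    hr, mn, sec = time_used.split(":")
    fraction = float(f"{mn}.{sec}") / 60      # (29.41)/60 = 0.49016...
    value = int(hr) + fraction
    return int(value * 10000) / 10000         # truncate, do not round

print(hours_decimal("153:29:41"))  # 153.4901
```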
