Storage Switch Recommendation

SAN fabric switches: which SAN switch would you recommend that supports both iSCSI and Fibre Channel, with more than 25 additional fibre ports on each switch? The customer currently has four 16-port Cisco switches he's looking to replace.

The reason the 9124 has such a great price point is that Cisco developed a totally buttkicking chipset on the back end to handle the Fibre Channel traffic at high speed, instead of the Brocade way of chaining a whole bunch of chipsets together and trying to cover it up with buffer memory. In the end this results in major cost savings, which it looks like Cisco is passing on to the consumer.
Now, I would venture to guess that iSCSI is not within the current capabilities of that chipset.
It also kind of goes against the segment I see the 9124 marketed to. I see the target market of the 9124 as the smaller storage implementations where the storage "network" aspect is almost an afterthought.
While multi-protocol access such as iSCSI is nice for these segments, I think the cost of providing it in the switches would price Cisco out of the market in this segment.
--Colin

Similar Messages

  • Nexus as a Storage Switch?

    Hi Guys
    With the new Unified Ports concept available in the Nexus 5K, can we say that the Nexus 5K is as good a storage switch (one to which SAN boxes can be directly connected) as the MDS 9124/9148...
    There is no clarity on this, and there is a lot of speculation in the channel community over it. It would help if Cisco came out with a response to dispel the doubts.
    I also see a Nexus Advanced SAN license for the Nexus 7K. Does this mean that the N7K can also be used as an MDS 9500?
    Thanks
    Sumesh

    The N7K also has FC switching code, but only for FCoE-connected devices. The N7K doesn't support an FC 4/8G SFP; you could, however, connect SAN FCoE front-end ports and servers with CNAs to the N7K and do all zoning there.
    I guess it would be possible to connect some 5548s over FCoE to the N7K and use it as your FC core with the N5Ks as your FC edge. All SAN related configs and FCoE devices must be connected to a dedicated FCoE VDC on the N7K and you must have an F1/F2 line card to support FCoE.
    An MDS 9500 with the FCoE card could also connect to the N7K for FC switching. For example, if your FC core is a 9500, you could connect hosts with CNAs and a SAN array with FCoE front-end ports to the N7K, while all of the legacy traditional FC connections stay on the 9500. The 9500s and N7K could then connect together via the 9500's FCoE card.
    Lots of cool ways to unify LAN/SAN these days. All depends on what you have now and what the long term plans are.
    The biggest challenge I have faced with trying to get FCoE into customers is political. With FCoE, who owns the switch? The LAN or SAN team? Sure, you can use RBAC to give the SAN team access to only the FC side of the switch, but there are still political hurdles to get over before FCoE becomes mainstream.

  • KEY DIFFERENCES BETWEEN A CATALYST LAN SWITCH & A STORAGE SWITCH (MDS)

    Hi Guys,
    I have a very basic query. I want to know the difference between the switches we connect to our campus networks and the switches connected to storage area networks. I don't mean the cost and such, but rather how they forward packets and the technologies involved. If I have a Catalyst switch with 10Gb ports and a SAN switch with 10Gb ports, what would be the performance difference? What would be the difference in their forwarding mechanisms?

    Hi Rustom,
    The big difference is that the 10Gb ports on the Catalyst are Ethernet and the 10Gb ports on the MDS switch are for Fibre Channel only.  You can't use the MDS ports for Ethernet.
    Jim

  • Switch recommendation, to add more wired ports to network

    I have a home network that goes from a broadband modem to an Apple AirPort Extreme.  I use the wireless for our iPhones and iPads.  However, I've wired the house so that my computers, NAS, and Apple TV are all wired.  It works great - Time Machine is much faster, and the microwave doesn't interfere with Apple TV.  My only issue is that I don't have enough ports to connect everything at once.  I thought I'd just get a Gigabit switch to connect the computers and Apple TV to (8 port?), and then connect that to the Extreme.  So the Extreme will still be the firewall out to the cable modem.
    Any recommendations on what switch to buy?  I want something reliable and fast, but don't want to go overboard.  I was looking at a Netgear, but got confused between 802.11 and 802.3a?  I also got confused about PoE.  I don't plan on hooking any phones to it, so I don't think I need PoE... but if I bought a switch with PoE, would that cause issues?  Any advice is appreciated.  Thanks.

    Hi Bob & edex67,
    Thanks for the quick responses.  I'm glad you both recommend Netgear... I'm looking at two models and would love your recommendations:
    GS108 (http://store.netgear.com/store/netgear/en_US/pd/ThemeID.17545600/productID.124529400/parentCategoryID.44216100/categoryId.44398900)
    GS608
    http://eshop.macsales.com/item/Netgear/GS608/
    They are both 8-port Gigabit switches, but I can't really tell what the difference is between them.  Is it just the case?  I'm sure the GS108 is more durable (against drops and such) since it has a metal case, but am I missing something else?
    Thanks again for your help!
    -Ryan

  • Campus LAN Access Switch recommendation

    Hi all,
    I am looking at the specs of the 2960X and 3750v2 switches as possible replacements for some old 3750 switches which are approaching End of Support.
    Am I right in understanding that the performance (both packet switching and backplane bandwidth) is better on the 2960Xs than the 3750v2s? Although it looks like the 3750v2s are a lot more feature-rich and also have dCEF.
    The datasheets for the 2960X report 80 Gbps stacking bandwidth, 216 Gbps backplane bandwidth and at least 70 Mpps, whereas the 3750v2s have only a 32 Gbps switching fabric and a maximum forwarding rate of 13 Mpps! Is there something I am missing here?
    I have no idea of costs, but I'm just looking at getting the best value for money out of our access switches.
    The 3650s and 3850s look good too, but I imagine they are pretty costly compared to the 2960s, and I do not think we need integrated WLCs in our access switches as the APs in our building are minimal.
    Any advice appreciated!
    Thanks
    Mario

    BTW, you realize, 3750v2s are end-of-sale?
    Correct, a 2960X might have higher fabric bandwidth and PPS ratings than a 3750v2, but that doesn't mean it's faster or better.  For fabric bandwidth and PPS, you need to look at the needs of the ports on the device.
    A 3750v2 with 48 copper FE ports and 4 SFP gig ports has 8.8 Gbps of port bandwidth, so a non-blocking fabric needs to support 17.6 Gbps.  As you note, the 3750v2 fabric is listed as 32 Gbps, so you're covered there.
    The same 8.8 Gbps of port bandwidth needs up to 13.0944 Mpps (1.488 Mpps per gig) for wire rate with minimum-size Ethernet frames.  Your noted 13 Mpps just about covers that too.
    So, basically, a 3750v2 switch is wire-rate capable.
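    To make the arithmetic explicit, here is a quick sketch of the same wire-rate check (my own restatement of the figures above; it assumes 64-byte minimum-size frames and the usual 1.488 Mpps-per-gig figure):

    ```python
    # Wire-rate check for a 3750v2-48 (48x FE + 4x SFP gig), assuming
    # 64-byte minimum-size Ethernet frames (~1.488 Mpps per Gbps).
    fe_ports, ge_ports = 48, 4
    port_bw_gbps = fe_ports * 0.1 + ge_ports * 1.0   # 8.8 Gbps of port bandwidth
    fabric_needed_gbps = port_bw_gbps * 2            # 17.6 Gbps non-blocking (duplex)
    pps_needed_mpps = port_bw_gbps * 1.488           # ~13.09 Mpps at minimum frame size

    print(f"fabric needed: {fabric_needed_gbps:.1f} Gbps vs. 32 Gbps listed")
    print(f"pps needed   : {pps_needed_mpps:.2f} Mpps vs. ~13 Mpps listed")
    ```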
    When you get into stack bandwidth, even more than with other switch parameters, there are lies, damn lies, and device performance specifications.  Trying to judge one stack architecture against another gets very complicated very quickly.
    On the 2960 series, I believe Cisco is "adding" each switch-to-switch ring link to an aggregate total.  In an ideal situation, if traffic only needed to go from switch 1 to switch 2, and from switch 2 to switch 3, then the aggregate summation does have a bandwidth advantage over StackWise "bus"-like ring usage.  If traffic needs to go between all 3 switches, traffic from switch 1 to switch 3 will need to share the bandwidth also being used by traffic from switch 1 to switch 2.
    I.e. the 2960's 80 Gbps doesn't mean you get 80 Gbps between just two switches, or the benefit of all 80 Gbps within a maximum-member 2960 stack.
    (As an aside, compare StackWise vs. StackWise Plus.  The latter has twice the physical bandwidth, but it also operates much "smarter".  Again, unwinding how stacks work, and their impact on your needs, is complicated.)
    There's also more to a switch's performance than raw bandwidth and PPS rates.  The switch's architecture, and other switch specifications, can make a big difference in real-world performance.  You'll find 3560/3750s with the same fabric bandwidths and PPS rates as some 49xx switches, but the latter will often deal with busy servers much, much better due to different port buffering.
    All the above also means that, without some real analysis of both your needs and the devices being considered, anyone's recommendations should be taken with a large grain of salt; including mine.  ;)
    That said, for simple L2 edge port usage, the less expensive 2960 series might be just fine for you.  If you want to reduce costs even more, you might also look at Cisco's SMB switches, some of which I think are now also stackable.

  • NIC teaming and Hyper-V switch recommendations in a cluster

    Hi,
    We've recently purchased four HP Gen8 servers with a total of ten NICs to be used in a Hyper-V 2012 R2 cluster.
    These will be connecting to iSCSI storage, so I'll use two of the NICs for the iSCSI storage connection.
    I'm then deciding between two options.
    1. Create one NIC team and one extensible switch, and create vNICs for Management, Live Migration and CSV\Cluster, with QoS to manage all this traffic. Then connect my VMs to the same switch.
    2. Create two NIC teams with four adapters in each.  Use one team just for the Management, Live Migration and CSV\Cluster vNICs, with QoS to manage all this traffic.
    The other team will then be dedicated just to my VMs.
    Is there any benefit to isolating the VMs on their own switch?
    Would having two teams allow more flexibility in the teaming configurations I could use, such as Switch Independent\Hyper-V Port mode for the VM team? (I do need to read up on the teaming modes a little more.)
    Thanks,

    I'm not teaming the iSCSI adapters; those would be configured with MPIO.
    What I want to know about is option 1: create one NIC team and one extensible switch, and create vNICs for Management, Live Migration and CSV\Cluster, with QoS to manage all this traffic, then connect my VMs to the same switch.
    http://blogs.technet.com/b/cedward/archive/2014/02/22/hyper-v-2012-r2-network-architectures-series-part-3-of-7-converged-networks-managed-by-scvmm-and-powershell.aspx
    What are the disadvantages of this configuration?
    Should RSS be disabled on the NICs in this configuration, with DVMQ left enabled?
    After reading through this post, I think I'll need to do that.
    However, I'd like to understand it a little more.
    I have the option of adding an additional two 10Gb NICs.
    This would mean I could create another team and Hyper-V switch on top and dedicate it to my VMs, leaving the other team for CSV\Management and Live Migration.
    How does this option affect the use of RSS and DVMQ?

  • iSCSI storage solution recommendation?

    Hi, I am trying to understand where exactly iSCSI fits among the storage solutions.
    Is iSCSI recommended?
    My interest:
    I would like to provide our users access to our mass storage, provided by a Sun Fire V890 via 6130 storage arrays.
    Our users, working from their Windows boxes, would like to have their mass storage directories mounted as a network drive. Do iSCSI software initiators or hardware HBAs fit into these requirements?
    Can anyone please provide any form of guidance to develop an understanding of the requirements?
    Thank you!

    Hi,
    Depending on your situation, iSCSI might be a low-cost replacement for a Fibre Channel SAN.
    Both are block-level storage solutions. That is, the protocols access devices by reading/writing blocks of raw data, not knowing what file they belong to.
    The advantage of iSCSI is that it communicates over IP and as such is even able to use the internet for connecting servers and storage placed in a remote location (FC is able to do that too, but requires extra, expensive routing hardware).
    Furthermore, if it is about connecting local storage to a server, iSCSI can use a basic UTP-cabled Ethernet infrastructure, which is a lot cheaper than an FC infrastructure. One should, however, keep in mind that iSCSI requires a separate network to perform well, so running iSCSI devices over a company network that is also used for other communications is not a good idea.
    To get good performance, an iSCSI Ethernet infrastructure should be at least Gigabit-based, and it should be redundant for availability.
    FC SANs definitely offer the better performance. This is due to infrastructures running at 4 Gbit/s (most common in recent setups), whereas iSCSI is mostly implemented on a 1 Gbit/s infrastructure.
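    Purely as a rough illustration of that raw-throughput gap (ballpark nominal figures only; real iSCSI/FC performance also depends on protocol overhead, the initiators and the array itself):

    ```python
    # Ballpark per-link throughput before protocol overhead. ~125 MB/s for
    # GbE and ~400 MB/s for 4G FC are the commonly quoted nominal rates,
    # not measurements.
    links_mb_per_s = {
        "1 Gbit/s Ethernet (iSCSI)": 1000 / 8,   # ~125 MB/s
        "4 Gbit/s Fibre Channel":    4 * 100,    # ~400 MB/s
    }
    for link, mb_s in links_mb_per_s.items():
        print(f"{link:27s} ~{mb_s:.0f} MB/s per link")
    ```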
    As said, both FC and iSCSI are block-based. If you need the storage to replace a file server, neither is the right solution. In that case you need a NAS (network attached storage) solution, which gives you file-sharing capabilities right away (CIFS, NFS, etc.).
    (iSCSI and/or FC can sit behind the NAS device as the actual storage; that would involve so-called gateways.)
    An issue that always comes up with iSCSI is that it is not fully vendor independent. The servers to be connected have to run a so-called iSCSI initiator and, to avoid problems, only initiators certified by the storage vendor should be used. Furthermore, a software iSCSI initiator takes quite a lot of CPU power from the server, so the server should have plenty of CPU headroom left to handle it. There is a solution for this: rather expensive network cards that have the initiator implemented in hardware (so-called iSCSI HBAs, or TOEs = TCP Offload Engines; HBAs allow booting from iSCSI too), but besides being expensive, these initiators need to be on the storage vendor's certified list too.
    So iSCSI is not as open as it seems at first (things are getting better, however).
    FC is not free of all these compatibility problems either. In fact, depending on the storage vendor you choose, there are several compatibility matrices to be met in order for the total solution to be certified. You should leave the design of an FC SAN to a specialist who knows where to find these matrices and how to use them.
    My advice would be to first decide on the budget (based on business requirements), then decide on whether file or block access is needed from the storage device and which OS platforms are to connect, and then decide on the performance you need.
    That will lead you to the right choice of storage technology.
    Regards,
    Willeon

  • Access Switch Recommendation for ISE

    Dear Expert,
    My client is thinking of rolling out ISE in the network. They have old 2950G switches, which only support authentication with ISE.
    In order to utilize most of the features, if not all, we are thinking of replacing them with 2960Gs.
    Any advice or experience in this regard on the recommended model for the access layer will be very helpful.
    Thanks

      You can use any of the L2 switches for 802.1X authentication, and if you want L3 features you should use the L3 switches. For reference, you can check the list of features and IOS versions at the following links:
        http://www.cisco.com/en/US/solutions/collateral/ns170/ns896/ns1051/product_bulletin_c25-712066.html
        http://www.cisco.com/en/US/solutions/ns170/ns896/ns1051/trustsec_matrix.html
    and hardware compatibility from the following list for all ISE versions:
         http://www.cisco.com/en/US/products/ps11640/products_device_support_tables_list.html

  • Request for Fiber Switch Recommendation

    I'm looking for a fiber switch to connect 8-12 computers via multimode fiber at 100 Mb.  I also need the switch to act as a DHCP server.  I'm looking at the WS-C3750G-12S-S.  I think this will do what I need.  So here are my questions:
    Does this model require me to purchase separate SFP transceivers, or are they included?
    Is there a cheaper solution?
    Thanks

    The 100BASE-FX SFP part number is GLC-GE-100FX, and you can get this working on a 3750G-12S (if you can still buy this model).
    Yes, the GLC-GE-100FX is a separate line item and NOT bundled.
    The same switch can also act as a DHCP server.

  • Gigabit Switch Recommendations for CCTV

    Hi Mpk,
    Thank you for all your detailed responses. I have two NVRs with 40 cameras on each. Mainly, I need to install Gigabit switches for quicker loading of recorded clips, which currently take ages to download from my rackmount NAS drives.

    How many cameras are you deploying within the GbE network? 
    For network budgeting purposes I like to use 5 MB/s per HD camera as an estimate of the bandwidth requirement. That gives you about 25 HD cameras sending raw video to your NVR. Now, almost no cameras send out raw video; they use some type of video codec instead.
    So again the question is how many cameras do you plan on supporting. 
    Also, with a 10GbE uplink you must be planning on a lot of cameras; I doubt that a single NVR can move that much data to disk.
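    As a back-of-the-envelope budget using the 5 MB/s-per-camera rule of thumb above (a worst-case assumption; codec-compressed streams will be much lighter):

    ```python
    # Camera budget per GbE link, using the 5 MB/s per HD camera estimate.
    GBE_MB_S = 125           # ~1 Gb/s expressed in MB/s, ignoring overhead
    PER_CAMERA_MB_S = 5      # assumed worst-case HD stream

    print(f"~{GBE_MB_S // PER_CAMERA_MB_S} cameras per GbE link")   # ~25

    # 2 NVRs x 40 cameras = 80 cameras -> ~400 MB/s (~3.2 Gb/s) aggregate,
    # which is where a 10GbE uplink toward the NVR/NAS starts to make sense.
    total_cameras = 2 * 40
    agg_mb_s = total_cameras * PER_CAMERA_MB_S
    print(f"aggregate: ~{agg_mb_s} MB/s (~{agg_mb_s * 8 / 1000:.1f} Gb/s)")
    ```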

  • Storage Subsystem Recommendations for DB: How many IOPs?

    Hi,
    I was wondering how to determine whether a certain storage subsystem or setup is sufficient for our workload. Running the application itself doesn't really help because we currently aren't under load (the load is very light). However, I did run some disk benchmarking tools simulating random 8K reads and writes (75% writes and 25% reads, since that's what our workload looks like). The result was about 380 IOPS and about 3 MB/s. Note that this is random 8K I/O, not sequential. So is this any good? I feel it's kind of low. What kind of I/O and IOPS are you getting for your Oracle DB, and for what kind of load? Thanks for your help.

    I am not sure what simulation tool you are talking about to generate a certain number of IOPS.
    The best way to simulate your system usage is by using tools like Mercury LoadRunner or Borland's Silk Performer. These tools allow you to simulate system load in terms of actual application users. E.g., you can break down your expected system usage in terms of the number of concurrent users using different operations (modules of the application) concurrently.
    Say you have a financial application with 100 concurrent users:
    out of those 100 users, 10 are processing vouchers, 10 are processing vendor information, 40 are processing general ledgers, 20 are doing accounts receivable and 20 are doing accounts payable.
    You can break it down further, say for an 8-hour normal work day, in terms of how many vouchers are processed, the total number of ledger entries, vendor queries/updates, and so on.
    Once you set up this kind of scenario (which can be changed programmatically and dynamically in terms of the number of concurrent users of a module or the number of iterations performed by these virtual users), you can measure the system/DB performance in terms of I/O and CPU utilization and decide whether your storage meets the requirement of a specific response time.
    Typically, for best performance I would go for RAID 10 (these days storage is cheap). RAID 10 gives you both performance and redundancy (mirrored and striped).
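    For what it's worth, here is a minimal sketch of the arithmetic behind those numbers, using the common rule-of-thumb write penalties (2 for RAID 10, 4 for RAID 5); these are generic values, not figures for any particular array:

    ```python
    # Sanity-check the benchmark above (380 random 8K IOPS, 75% writes) and
    # estimate the back-end disk IOPS a RAID 10 vs. RAID 5 group would absorb.
    front_end_iops = 380
    write_ratio = 0.75
    block_kb = 8

    mb_s = front_end_iops * block_kb / 1024
    print(f"{front_end_iops} IOPS x {block_kb} KB ~= {mb_s:.1f} MB/s")  # matches the ~3 MB/s observed

    for raid, penalty in (("RAID 10", 2), ("RAID 5", 4)):
        back_end = front_end_iops * (1 - write_ratio) + front_end_iops * write_ratio * penalty
        print(f"{raid}: ~{back_end:.0f} back-end disk IOPS required")
    ```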
    Hope it helps.

  • Query: Best practice SAN switch (network) access control rules?

    Dear SAN experts,
    Are there generic SAN (MDS) switch access control rules that should always be applied within the SAN environment?
    I have a specific interest in network-based access control rules/CLI-commands with respect to traffic flowing through the switch rather than switch management traffic (controls for traffic flowing to the switch).
    Presumably one would want to provide SAN switch demarcation between initiators and targets using VSAN, Zoning (and LUN Zoning for fine grained access control and defense in depth with storage device LUN masking), IP ACL, Read-Only Zone (or LUN).
    In a LAN environment controlled by a (gateway) firewall, there are (best practice) generic firewall access control rules that should be instantiated regardless of enterprise network IP range, TCP services, topology etc.
    For example, the blocking of malformed TCP flags or the blocking of inbound and outbound IP ranges outlined in RFC 3330 (and RFC 1918).
    These firewall access control rules can be deployed regardless of the IP range or TCP service traffic used within the enterprise. Of course there are firewall access control rules that should also be implemented as best practice that require specific IP addresses and ports that suit the network in which they are deployed. For example, rate limiting as a DoS preventative, may require knowledge of server IP and port number of the hosted service that is being DoS protected.
    So my question is, are there generic best practice SAN switch (network) access control rules that should also be instantiated?
    regards,
    Will.

    Hi William,
    That's a pretty wide net you're casting there, but I'll do my best to give you some insight into the matter.
    Speaking pure Fibre Channel, your only real way of controlling which nodes can access which other nodes is zones.
    For zones there are a few best practices:
    * Default zone: don't use it, unless you're running FICON.
    * Single initiator zones: one host, many storage targets. Don't put 2 initiators in one zone or they'll try logging into each other, which at best will give you a performance hit and at worst will bring down your systems.
    * Don't mix zoning types: you can zone on WWN, on port, and Cisco NX-OS will give you a plethora of other options, like device alias or LUN zoning. Don't use different types of these in one zone.
    * Device alias zoning is definitely recommended, with Enhanced Zoning and Enhanced Device Alias enabled, since it will make replacing HBAs a heck of a lot less painful in your fabric.
    * LUN zoning is being deprecated, so avoid it. You can achieve the same effect on any modern array by doing LUN masking.
    * Read-only zones exist, but again, any modern array should be able to make a LUN read-only.
    * QoS on Zoning: Isn't really an ACL method, more of a congestion control.
    VSANs are a way to separate your physical fabric into several logical fabrics.  There's one huge distinction here with VLANs, that is that as a rule of thumb, you should put things that you want to talk to each other in the same VSANs. There's no such concept as a broadcast domain the way it exists in Ethernet in FC, so VSANs don't serve as isolation for that. Routing on Fibre Channel (IVR or Inter-VSAN Routing) is possible, but quickly becomes a pain if you use it a lot/structurally. Keep IVR for exceptions, use VSANs for logical units of hosts and storage that belong to each other.  A good example would be to put each of 2 remote datacenters in their own VSAN, create a third VSAN for the ports on the array that provide replication between DC and use IVR to make management hosts have inband access to all arrays.
    When using IVR, maintain a manual and minimal topology. IVR tends to become very complex very fast and auto topology isn't helping this.
    Traditional IP ACLs (permit this proto to that dest on such a port and deny other combinations) are very rare on management interfaces, since those are usually connected to already separated segments. The same goes for Fibre Channel over IP links (which connect to Ethernet interfaces in your storage switch).
    They are quite logical to use and work just the same on an MDS as on a traditional Ethernet switch when you want to use IP over FC (not to be confused with FC over IP). But then you'll logically be using your switch as an L2/L3 device.
    I'm personally not an IP guy, but here's a quite good guide to setting up IP services in a FC fabric:
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/mds9000/sw/4_1/configuration/guides/cli_4_1/ipsvc.html
    To protect your SAN from devices that are 'slow-draining' and can cause congestion, I highly recommend enabling slow-drain policy monitors, as described in this document:
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/mds9000/sw/5_0/configuration/guides/int/nxos/intf.html#wp1743661
    That's a very brief summary of the most important access-control-related Best Practices that come to mind.  If any of this isn't clear to you or you require more detail, let me know. HTH!

  • Storage Groups

    Hello,
    When booting from SAN in UCS, what's the best practice when creating the Storage Groups in the disk array?
    For instance, with VMware: is it best practice to have one storage group for each ESXi host, and to add its own ESXi boot LUN (id=0) plus the VM datastore LUNs needed?
    Do other environments (Linux, Windows, Hyper-V) have anything special to take care of in this regard?
    Thanks,

    It's a security issue.  Because the LUN ID can be changed easily on the host, you could essentially clobber the wrong LUN if your server admin mistakenly changed the LUN ID.  Also, once a host is booted up, it will be able to see & access every LUN within the storage group regardless of LUN ID.  Whether the LUN ID matches only matters for a host trying to SAN boot.
    The two main forms of security enforced in storage are Zoning and Masking.
    Zoning - done on the storage switch, acts like an ACL limiting the scope of what a zone's members can see.  A zone will normally only contain one host and the target WWNs.  **Who can I see**
    Masking - done on the storage array limits "what" LUNs a host has access to.  This is done in the form of Storage groups.  **What can I access**.
    Circumventing either poses a great risk of data corruption/destruction, since various operating systems can only read their native file systems.  E.g., if you had all your hosts in one storage group (ESX, Windows, etc.) and tried to separate them only by LUN ID, a simple one-digit change of the boot target LUN ID on the initiator could cause a host to not recognize the filesystem and potentially write a new signature to the disk - overwriting your existing data.  Windows can't read a Linux partition and vice versa.
    Follow these best practices and your data will be much safer & secure.
    Regards,
    Robert

  • iSCSI traffic running across Cisco 3750 switches

    My customer has a small shop with 2 servers running iSCSI to a SAN device. They are looking for a switch recommendation, and I would like to use a pair of Cisco 3750s to take advantage of the VSS technology for redundancy, L3 and some other core requirements, but I am concerned about performance.
    I thought my other option would be to use 3750-Es, but I'm concerned about the added cost.

    The fabric and pps ratings for the 3750Gs don't support wire rate for more than 16 gig ports. (Max performance for 3750G models is 38.7 Mpps and a 32 Gbps fabric; for the 3750-E it's 101.2 Mpps and a 128 Gbps fabric [NB: the pps is enough, but fabric bandwidth is slightly insufficient for the 48-port models - the similar 4948 offers 102 Mpps and 136 Gbps].)
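    To make that concrete, here is the same kind of wire-rate arithmetic for a 48-port gig model with two 10G uplinks (my assumption about the port configuration behind the figures quoted above; 64-byte frames, 1.488 Mpps per gig):

    ```python
    # Wire-rate requirement for 48x 1G access ports + 2x 10G uplinks,
    # compared against the quoted per-model ratings.
    port_bw_gbps = 48 * 1 + 2 * 10            # 68 Gbps of port bandwidth
    fabric_needed = port_bw_gbps * 2          # 136 Gbps non-blocking (duplex)
    pps_needed = port_bw_gbps * 1.488         # ~101.2 Mpps at minimum frame size

    ratings = {"3750G": (38.7, 32), "3750-E": (101.2, 128), "4948": (102, 136)}
    for model, (mpps, fabric) in ratings.items():
        ok = "yes" if mpps >= pps_needed and fabric >= fabric_needed else "no"
        print(f"{model:7s} wire-rate capable: {ok} "
              f"(has {mpps} Mpps / {fabric} Gbps, needs {pps_needed:.1f} / {fabric_needed})")
    ```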
    Another performance limitation of the 3750s (and to a lesser extent the 3750-Es) is stack ring bandwidth. As best I can tell, the 32 Gbps is really dual 8 Gbps duplex (dual 16 Gbps duplex for the -Es). An important distinction between the original StackWise technology and the later StackWise Plus: the former puts a copy of all traffic on the stack ring, the latter suppresses unnecessary unicast. The former also requires the sender to remove the traffic from the stack ring, while with the latter the destination removes it. (I.e. the "Plus" technology really is a plus.)
    For really, really demanding performance, a stack ring isn't the same as a chassis fabric (e.g. the 4500s), and within a single switch, the lower-end switch models can't always provide wire rate for all their ports. However, the real question is whether you need this performance in a small shop, even though iSCSI is being used.
    In other words, it's rare to see all ports demanding full bandwidth, so a stack of 48-port 3750Gs might work just fine for your customer if the actual need doesn't exceed what the device can supply.
    In similar situations, I present the customer with such facts. Based on what the expected load is, device "A" might work fine, but it can't guarantee performance beyond a certain level. If the customer wants the capability for more performance, for growth or "just to be safe", that can be done too; here are your options (and the extra cost) for that.
    BTW, if SAN devices can support 10gig, then you'll need something better than the 3750G since the model with a single 10gig port has been discontinued.

  • Will RAC's performance bottleneck be the shared disk storage ?

    Hi All
    I'm studying RAC and I'm concerned about RAC's I/O performance bottleneck.
    If I have 10 nodes and they all use the same storage disk to hold the database, then they will do I/O to the disk simultaneously.
    Maybe we get more latency ...
    Will that be a performance problem?
    How does RAC solve this kind of problem?
    Thanks.

    J.Laurence wrote:
    I see FC can solve the problem with bandwidth (throughput).
    There are a couple of layers in the I/O subsystem for RAC.
    There is Cache Fusion, as already mentioned. Why read a data block from disk when another node has it in its buffer cache and can provide it instead (over the interconnect communication layer)?
    Then there are the actual pipes between the server nodes and the storage system. Fibre is slow and not what the latest RAC architectures (such as Exadata) use.
    Traditionally, you pop an HBA card into the server that provides you with 2 fibre channel pipes to the storage switch. These usually run at 2 Gb/s, and the I/O driver can load balance and fail over. So in theory it can scale to 4 Gb/s and provide redundancy should one pipe fail.
    Exadata and more modern RAC systems use HCA cards running InfiniBand (IB). This provides scalability of up to 40 Gb/s, and dual ports, which means that you have 2 cables running into the storage switch.
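    A rough comparison of the aggregate host bandwidth in the two setups just described (nominal link rates only; encoding and protocol overhead are ignored):

    ```python
    # Nominal aggregate bandwidth per host: dual 2 Gb/s FC paths vs. a
    # dual-port 40 Gb/s (QDR) InfiniBand HCA. Illustrative numbers only.
    dual_fc_gbps = 2 * 2       # two 2 Gb/s FC pipes  -> 4 Gb/s aggregate
    dual_ib_gbps = 2 * 40      # dual-port QDR IB HCA -> 80 Gb/s aggregate

    print(f"dual 2G FC HBA      : {dual_fc_gbps} Gb/s aggregate")
    print(f"dual-port QDR IB HCA: {dual_ib_gbps} Gb/s aggregate "
          f"({dual_ib_gbps // dual_fc_gbps}x)")
    ```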
    IB supports a protocol called RDMA (Remote Direct Memory Access). This essentially allows memory to be shared across the IB fabric layer, and is used to read data blocks from the storage array's buffer cache into the local Oracle RAC instance's buffer cache.
    Port-to-port latency for a properly configured IB layer running QDR (quad data rate) can be lower than 70 ns.
    And this does not stop there. You can of course add a huge memory cache in the storage array (which is essentially a server with a bunch of disks). Current x86-64 motherboard technology supports up to 512GB RAM.
    Exadata takes it even further as special ASM software on the storage node reconstructs data blocks on the fly to supply the RAC instance with only relevant data. This reduces the data volume to push from the storage node to the database node.
    So fibre channels in this sense is a bit dated. As is GigE.
    But what about the hard drives' read and write I/O? Not a problem, as the storage array deals with that. A RAC instance that writes a data block writes it into the storage buffer cache, where the storage array software manages that cache and does the physical write to disk.
    Of course, it will stripe heavily and will have 24+ disk controllers available to write that data block, so do not think of I/O latency in terms of the actual speed of a single disk.
