SAN/storage design query: NetApp + SAN

I have a design query that I'm hoping someone can help with.
We need to provide network connectivity to some NetApp storage (Ethernet based) and to:
30 x 10 Gb servers
50 x 1 Gb servers.
I have had a lot of trouble finding Cisco 10 Gb Ethernet switches to match the above requirements. The new SFP+ twinax cabling seems to be
the only thing available, and it is no good for server connections.
I have settled on a pair of interlinked Nexus 3064s, with their 48 ports of 10 Gb fibre capacity, for the servers that require 10 Gb, and Cisco 3750-X series
switches for the 1 Gb servers. The 3750-Xs provide cross-stack port-channelling via StackWise and will also provide the Layer 3 connectivity.
Is this a good design and will this work?
Any advice will be greatly appreciated.
Thanks
Sam

Jonathan See wrote: (original question quoted above)
Sam.
It'll work, but I question why you need the 3750X switches in this at all.
The 3064 has 48 fixed 1/10 Gb ports, giving a total of 96 1 Gb or 10 Gb ports across two units (depending on transceiver), and you only need 80. You can use the quad (QSFP+) ports to link the 3064s at layer 2, giving you up to a 160 Gb backbone between them, and then just connect your servers (be they 1 Gb or 10 Gb) across the two 3064s. Alternatively, use two of the quad ports (80 Gb) to link the 3064s and break out the other two into 8 extra 10 Gb ports per 3064, which can be linked to other switches in your network.
The 3064 will also do layer 3 at the "base" licence level (everything except BGP, full OSPF and full EIGRP; OSPF is limited to 256 routes and EIGRP to stub at base level), or full layer 3 routing at the LAN Enterprise licence level.
Unless you foresee a need to expand your 1/10 Gb port requirements beyond the 96 fixed ports available across two 3064s, I can't see a need for the 3750s at all.
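As a rough sketch of the interlink option described above (illustrative only; the interface numbers and channel ID are assumptions, not details from the thread), the back-to-back link between the two 3064s could look like:

```
! On each Nexus 3064: bundle the four QSFP+ ports into a single
! layer-2 trunk port-channel toward the peer switch (~160 Gb aggregate)
interface port-channel10
  description ISL to peer Nexus 3064
  switchport mode trunk

interface Ethernet1/49-52
  switchport mode trunk
  channel-group 10 mode active
```

Servers then attach to the remaining 48 SFP+ ports on either switch.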
Cheers.

Similar Messages

  • SAP NW 7.1 with 10g and NetApp SAN on VMware

    Hello together,
    I want to install several NW CE 7.1 installations with a 10g Oracle DB on Windows Server 2003.
    Now, our admins are debating the Windows hard disk drives like C: or D:.
    Some admins want to configure multiple hard disk drives (like C: and D:) on the Windows Server 2003 machine to optimize disk access, even though the server is connected to NetApp storage. They say that Windows Server 2003 optimizes disk access if more than one disk drive is configured.
    Other solution architects say this is not necessary, because the C:, D: and E: partitions are not on real hard disks but on a SAN connection. It would be fine to define a C: partition for Windows and a D: partition for Oracle and SAP, because the SAN is fast enough that you need not separate the disk load.
    How do you configure your systems with Oracle?
    best regards,
    Carsten Schulz

    Hi,
    I would mix local storage and SAN storage to get an optimal file distribution.
    Along with the partition layout suggested by Eric (as well as by SAP and Oracle), I would add an additional striped partition on the local hard disks of the box, used specifically for paging activity (virtual memory).
    It is good to distribute all the suggested files (of different purposes) across different disks, using RAID levels to guard against data loss as well as to achieve higher I/O performance.
    The RAID levels play an important role here. A storage partition at RAID-5 is recommended for the SAP data files (Oracle database files). I prefer RAID-0 for the OS executables and the SAP executables (sapmnt) on local hard disks (or on SAN storage LUNs).
    Origlogs and mirrlogs are very sensitive and important files (online redo logs) which contain all committed and uncommitted transaction entries and are required for instance recovery. They should be stored in different locations to cope with any unexpected file loss or corruption.
    Along with the other files, mirroring the control files to separate disk locations is also recommended. A control file is essential for the database to start and operate successfully.
    A separate location for the offline redo logs (ORAARCH) is recommended for media recovery, which is needed to perform complete or point-in-time database recovery.
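    Putting the above together, one possible (purely illustrative) mapping of file types to separate drives/LUNs on a Windows host might look like:

```
D:\oracle\SID\sapdata1..n    -> SAN LUNs (RAID-5)        # Oracle data files
E:\oracle\SID\origlogA,B     -> dedicated LUN            # online redo logs
F:\oracle\SID\mirrlogA,B     -> separate dedicated LUN   # mirrored redo logs
G:\oracle\SID\oraarch        -> separate dedicated LUN   # archived redo logs
control files                -> multiplexed across at least two of the above
pagefile (virtual memory)    -> locally striped disks (RAID-0)
```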
    Regards,
    Bhavik G. Shroff

  • New SAN storage

    My environment: PROD is Oracle 10g RAC on Windows 2003.
    • Oracle Cluster Ready Services software and database software are installed on local disk.
    • Oracle database and archived logs are on SAN storage.
    Now I want to move to new SAN storage, due to a problem with the old storage. How can I do this?

    Given what you've written I'm amazed you have anything working.
    You installed the CRS for what reason? RAC? ASM?
    Do you have a RAC cluster? Is it working?
    Do you want to build a RAC cluster?
    That something is a SAN confers almost no information of value. Is it copper or fibre? Is it EMC, Fujitsu, Hitachi, NetApp, Pillar, or someone else's? Does it have RAW devices that can be used to create ASM DiskGroups or a file system?
    Read the docs and repost addressing the missing information from this post.

  • SAN Storage solution in combination w/ Server 2012 R2 (HYPER-V)

    Hello everyone,
    We are currently in talks with 2 storage vendors (IBM and NETAPP) in order to replace/expand our current SAN storage and finally make the move to a virtualized environment (HYPER-V).
    For the NETAPP side, they proposed a FAS3220 for our main location and a FAS2240 for our DR location, both running Clustered ONTAP 8.2. For the IBM side, they proposed a couple of V3700's (under SVC).
    I'm aware this isn't the place to discuss these things in depth, but could anyone who has any experience w/ both solutions in combination w/ Server 2012 (R2) and HYPER-V recommend one solution over the other? Or is anyone in possession of a feature list
    for both solutions?
    Any help will be greatly appreciated.
    With kind regards,
    L.

    You should avoid NetApp when thinking about a Hyper-V deployment, for a reason: NetApp's internals are a pure filer built around WAFL, with NFS as a direct interface to it. In environments that need to talk NFS (VMware ESXi, and *nixes with a good NFS client implementation), NetApp really shines. But Hyper-V has no support for NFS as VM storage and pretty much never will, Windows' NFS client performs very poorly (latency 10x-100x higher than FreeBSD on the same hardware), and iSCSI on NetApp is a bolt-on solution (basically a file on a WAFL volume behind an iSCSI front end). So you're not going to get the best performance or feature coverage, at least until NetApp has had SMB 3.0 in production for a couple of years. Picking between IBM and NetApp in your case, assuming these are the only two options: go IBM. And if you need a SAN for your environment and can think twice, think about a virtual SAN. That's what the big hypervisor boys are doing.
    StarWind iSCSI SAN & NAS

  • Oracle RAC on SAN storage

    Hi
    We are in the midst of defining a process for a 2-node installation on a SAN storage device. Oracle 11.2 on RHEL 5.
    What is the recommended implementation process for the shared storage?
    We would like the mirroring to stay at the storage level.
    * NFS without ASM
    * NFS with ASM
    * LUN with ASM
    * LUN without ASM
    Or any other option, if any.
    I have gone through the documentation; it does give the options, but it does not say which is best practice.
    Thanks in advance, everyone, and apologies if there is already a post on this and I'm repeating it.
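    For reference, if the NFS route is chosen (with or without ASM), Oracle is particular about NFS mount options. An illustrative /etc/fstab entry for datafiles over NFS on Linux might look like this (filer name, export path and mount point are placeholders; verify the exact options for your platform and Oracle version):

```
# Oracle datafiles over NFS: hard mounts, no attribute caching
filer01:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
```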

    UDP is still relatively slow (due to Ethernet itself) compared to storage protocols over Fibre Channel and InfiniBand (which can be bonded/multipathed too). HBA and HCA cards all have at least 2 ports.
    The basic issue is that Ethernet was never designed with storage protocol considerations, or even interconnect considerations. Case in point: Oracle developed the IB protocol RDS (Reliable Datagram Sockets) because UDP was not efficient and scalable enough. According to Oracle's performance reports to the OFED committee when RDS was ratified as a standard IB protocol, it was 2x as fast as UDP while consuming 50% less CPU.
    Port-to-port latency for an InfiniBand fabric layer can be as low as 70 ns (Wall Street IB implementations being a real-world example). Most HPC centres using InfiniBand have latencies of around 100 ns.
    Ethernet is not the only solution, despite what vendors like Cisco are keen to push "as the only solution, so help us Donald Davies".
    So it is not so much a religious debate, as which-is-the-best-OS debates often turn into, as one of ignorance, and of Ethernet vendors exploiting and enforcing that ignorance.
    There are sound reasons why InfiniBand as an interconnect family grew from a 4% market share 10 years ago to over 40% today among the world's top-500 HPC clusters, while Ethernet's share has been decreasing.

  • Oracle tablespace creation problem on SAN storage from a Solaris x86 machine

    Hi everybody,
    I am new to this forum and also new to the Solaris operating system with SAN storage.
    I am using Oracle Database 10g on a Sun x86 machine with Solaris 5.10.
    While trying to create a tablespace on SAN storage in a mount point of 2.1 TB, the following error occurs:
    SQL> create tablespace test datafile '/appbrm01/oracle/oradata/test1.dbf' size 100M;
    create tablespace test datafile '/appbrm01/oracle/oradata/test1.dbf' size 1M
    ERROR at line 1:
    ORA-01119: error in creating database file '/appbrm01/oracle/oradata/test1.dbf'
    ORA-27037: unable to obtain file status
    Intel SVR4 UNIX Error: 79: Value too large for defined data type
    Additional information: 45
    SQL>
    But when I try to create a tablespace on SAN storage in a mount point of 1.8 TB, it works fine.
    So please, if anybody can help me solve this issue I will be very glad.
    Thanks
    Reshad

    Also, Data Pump from a NAS device can perform slowly.
    Take a look at this:
    10gR2 "database backup" to an NFS-mounted directory results in ORA-27054 NFS error
    Note 356508.1 - NFS Changes In 10gR2 Slow Performance (RMAN, Export, Datapump), from Metalink.
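    The 2.1 TB vs 1.8 TB behaviour in the original post is also consistent with a 32-bit file interface overflowing on a filesystem larger than 2 TB (EOVERFLOW, "Value too large for defined data type"). One coarse, illustrative check of large-file support on a mount point (the path below is a placeholder; substitute your SAN mount point):

```shell
# Ask the filesystem how many bits it uses to represent file sizes:
# 32 means 32-bit interfaces cap out near 2 GB, 64 means large files
# and large filesystems are handled through 64-bit interfaces.
getconf FILESIZEBITS /tmp
```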

  • Oracle 10.2.0.4 on SAN Storage.

    Hi Experts,
    I am in the process of installing a new Oracle 10g database for my
    production system, as part of an upgrade, on HP SAN storage with RAID-5.
    We run Oracle Apps 11i, and this is also going to be upgraded to R12.
    My concern is how to lay out the different Oracle database files across filesystems with different mount points to get the maximum benefit (i.e. which files should share a mount point, how many mount points there should be, etc.).
    Kindly suggest the best practice for this.
    Thanks in advance.

    Thanks,
    Do you mean that I can put ORACLE_HOME and the database on a single mount point?
    Because if I follow OFA, everything is going to be on a single mount point.
    And, for example, to make the control files and redo logs redundant (as recommended), I would put each copy of them on a different mount point.
    But don't I need that here?
    Thanks.

  • Windows 2008 R2 Cluster - migrating data to new SAN storage - Detailed Steps?

    We have a project where some network storage is falling out of warranty and we need to move the data to new storage. I have found separate guides for moving the quorum and for replacing individual drives, but I can't find an all-inclusive, detailed step-by-step guide for this process, and I know it has to happen all the time.
    What I am looking for is detailed instructions on moving all the data from the current SAN storage to the new SAN storage, start to finish, all server side; a separate storage team will present the new storage. I'll then have to install the multipathing driver, and I am looking for a guide that picks up at that point.
    The real concern here is that this machine controls a critical production process and we don't have a proper setup with storage and everything to test with, so it's going to be a little nerve-wracking.
    Thanks in advance for any info.

    I would ask Hitachi. As I said, the SAN vendors often have tools to assist with this sort of thing; after all, in order to make the sale they often have to show how to move the data with the least impact on the environment. In fact, I would have thought that this type of information would have been a requirement for the purchase of the SAN in the first place.
    Within the Microsoft environment, you are going to deal with generic tools, and using generic tools the steps will be similar to what I said:
    1. Attach storage array to cluster and configure storage.
    2. Use robocopy or Windows Server Backup for file transfer.  Use database backup/recovery for databases.
    3. Start with applications that can take multiple interruptions as you learn what works for your environment.
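    For step 2, a file-share copy with robocopy typically looks something like this (paths, switches and the log location are illustrative, not taken from the thread; tune retries and logging to your environment):

```
:: Mirror a share tree from the old LUN to the new one, preserving
:: ACLs/owner/audit info; short retry settings stop locked files stalling the run
robocopy S:\Shares T:\Shares /MIR /COPYALL /R:3 /W:5 /LOG:C:\Temp\san-move.log
```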
    Every environment is going to be different. To get into much more detail would require an analysis of what you are using the cluster for (which you never state in either of your posts), what sort of outages you can operate with, what sort of recovery plan you will put in place, etc. You have stated that your production environment is going to be your lab, because you do not have a non-production environment in which to test. That makes it even trickier to offer detailed steps for unknown applications/sizing/timeframes/etc.
    Lastly, the absolute best practice would be to build a new Windows Server 2012 R2 cluster and migrate to it. Windows Server 2008 R2 is already out of mainstream support, which means you have only five more years of support on your current platform, at which point you will have to perform another massive migration. Better to perform a single migration that gives you a much longer support window.
    . : | : . : | : . tim

  • Oracle 10g RAC - two nodes - change SAN storage - only one node up

    Hi all,
    We're using Oracle 10g on Red Hat Linux.
    We want to migrate to another SAN storage.
    The basic steps are:
    node 1:
    1. present the new volume to the host and create a partition
    2. oracleasm createdisk newvolume
    3. alter diskgroup oradata add disk newdisk
    4. alter diskgroup oradata drop disk OLDDISKOTHERSAN
    But node 2 is DOWN.
    We want to start node 2 only after ALL operations on node 1 have finished.
    What's your opinion? Any impact?
    Can I execute oracleasm scandisks on node 2 afterwards?
    thank you very much!!!!

    Modify the commands just slightly...
    1. present the volume to node1 AND node2
    2. create the partition on node1 [if necessary??]
    3. oracleasm createdisk newvolume [node1] ## I am not a fan of ASMLib and have never had to use it - however, apparently you do...
    4. oracleasm scandisks [node2]
    [then, on one of the nodes -- note that ADD DISK takes the device path, while DROP DISK takes the ASM disk name:]
    alter diskgroup oradata add disk '<path_to_newdisk1>' rebalance power 0;
    alter diskgroup oradata add disk '<path_to_newdisk2>' rebalance power 0;
    alter diskgroup oradata add disk '<path_to_newdisk_n>' rebalance power 0;
    alter diskgroup oradata drop disk OLDDISK1OTHERSAN rebalance power 0;
    alter diskgroup oradata drop disk OLDDISK2OTHERSAN rebalance power 0;
    alter diskgroup oradata drop disk OLDDISKnOTHERSAN rebalance power 0;
    alter diskgroup oradata rebalance power <n>;
    You can actually do this with BOTH nodes ONLINE. I was involved in moving 250TB from one storage vendor to another - all online - no downtime, no outage.
    And before you actually do this and break something - TEST it first!!!

  • New Mac Pro with fibre-attached SAN storage

    We have a digital media department that uses After Effects to produce content for different types of media boards, digital signage, post production, etc. They start with Red 4K files, which get edited in AE and rendered to different video formats. They want to move away from Windows PC based editors with multi-drive local RAID storage to the new Mac Pro with 8 Gb Fibre Channel attached to a shared SAN storage solution. First question: is this a wise move for running After Effects? Second question: do you need to create multiple logical volumes on the SAN storage to accommodate where the AE render location, cache, source, database, and general media content files are stored, or can this all be stored in different folders on one large 50 TB volume? Third question: should six different editors be sharing the location where these files are stored?

    Third question is, should six different editors be sharing the location where these files are stored?
    No. Each user should have his own home directory - not for performance reasons, but simply because they'll end up overwriting each other's files. As for the rest: logical volumes don't matter on your end, but SAN systems still use internal stripe groups, so it would make sense to define volumes on those stripe sets simply in the interest of performance. Beyond that, AE doesn't really care. It uses the operating system's file I/O, and 'NIX based systems don't differentiate between local volumes and other resources as long as they are mounted correctly in the file system.
    Mylenium

  • Need particular SAN storage document in English

    Hi,
    I have a SAN storage solution document. The document is in German, and I am unable to find the same document in English.
    I have attached the SAN storage solution document (in German) below.
    Can anyone provide me with the same document in English?
    I will be very thankful if anyone can help me.

    Thanks for the reply.
    Can you or anyone else provide a similar solution document covering MDS and the storage router?
    That would be very helpful for me.

  • New storage disks added to SAN

    Hey all,
    I have added a set of new storage disks to our SAN, which is connected to our NetWare boxes. The existing storage appears fine in Remote Manager. I have run "scan all" and "scan for new hardware", but I can't get the new storage to appear under either of the QLogic cards.
    Any suggestions as to what I need to do to get them to appear?

    More on my setup:
    I'm running NetWare 6.5 SP5 with 2 QLogic QL2300 cards. I have a 1 TB drive presented to the server - I think it's done right? The SAN is an IBM DS4300, and the new drive is set to LUN 1.
    Between the SAN and the NetWare server is a fibre switch, an IBM TotalStorage 16M2. No changes were made to the switch.
    There is an existing array set to LUN 0, and that works fine with the NetWare server.
    I have run SCAN ALL on the NetWare server to scan all LUNs, then gone into NSSMU and done a scan for new devices. The new array is not shown - just the original array on LUN 0 and the internal drive.
    Any ideas as to what I can do to get the new storage array to show up in NSSMU? Any help would be greatly appreciated!

  • Help with Oracle 10g Client Connectivity from Linux to IBM SAN storage

    Hello Oracle Experts,
    This is my first post. My client has an Oracle 10g database up and running on IBM SAN storage.
    We have some NMS tools running on Red Hat Linux 5. These tools require connectivity to the Oracle database residing on the SAN storage, which is connected with fibre cables.
    How do I establish connectivity from Linux to the SAN storage? I would be glad if you could explain the steps, and also whether there are any pre-installation/post-installation patches and procedures involved.
    If it were an IP-based network, we would normally give the IP address of the host running the database server. I have no idea about SAN storage connected with fibre cable.
    Please guide me in establishing connectivity from Linux 5 to the SAN.
    Thanks.
    Regards,
    RaviShankar.

    user13153556 wrote:
    Hi Rajesh,
    Actually, I will not be touching the Oracle instance's SAN box directly. I will only access the database from another machine - in my case, a Linux box.
    So my question is: how do you make the Oracle client on the Linux box connect to an Oracle instance running on a machine backed by SAN storage?
    Install the Oracle client on the Linux machine and make sure you have network connectivity from it to the database server. You connect to the server where the DB instance is running; you need not bother about the SAN storage.
    Make a TNS entry in the client's $ORACLE_HOME/network/admin/tnsnames.ora file.
    Use sqlplus to connect to the database from the client.
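    To make those last two steps concrete, a minimal illustrative tnsnames.ora entry and connection could look like this (hostname, port, service name and credentials are placeholders):

```
# $ORACLE_HOME/network/admin/tnsnames.ora on the Linux client
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
```

    Then connect with, e.g., sqlplus scott/tiger@ORCL.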
    Regards
    Rajesh

  • Unable to connect a NetApp SAN to a Cisco Catalyst at 10 Gb

    I have a Cisco Catalyst 4507R with a WS-X4606-X2-E and an X2-10GB-SR, to be connected to a NetApp SAN, and it does not work.
    With "show interface transceiver detail" it seems that the light level sent by the NetApp is good, but the link stays in the down/down state.
    Any help?

    Down/down indicates a layer 1 issue. Make sure the optic is correct on both ends (and the same type: LR vs SR), and also make sure you are using the right fibre. You can loop both SFPs back on the same Catalyst with the cable to see whether the link comes up.

  • Connect two domain controllers to SAN storage

    Hi everyone
    I have primary and secondary domain controllers, and I want to connect them to SAN storage as a cluster. I tried to configure Failover Clustering on them, but when adding them both in the Create Cluster Wizard I receive the following error (see the link):
    http://s14.postimg.org/lssjm2vu9/Screenshot_1.png
    So, is there any solution for this error, or is there another way to connect both DCs to the storage as a cluster?
    Any help will be appreciated.

    Hi,
    As far as I know, this configuration is not supported:
    http://support.microsoft.com/kb/2795523/en-us
    Regards
    Guido
