NetApp and Hitachi SAN

I have a Solaris 9 server attached to a SAN fabric built on two SilkWorm 3850 switches, with two Emulex LP9002L-F2 HBAs in the server. At present a Hitachi 9500V array presents LUNs to the Solaris 9 server without any problems. We have acquired two NetApp 3820c filers that are attached to the same SAN fabric. The server uses Veritas VxFS 4.1 and VxDMP for path failover.
My question is:
Do I need separate HBA cards for each storage array (one set for the Hitachi and one set for the NetApp), or will the existing HBAs do as long as the NetApp presents LUNs to them? The NetApp technician says you cannot access more than one vendor's storage at a time; it is either all Hitachi or all NetApp.
Please shed some light!
Thanks in advance.

There is what works.
There is what is supported.
It probably works to present the NetApp LUNs to the same HBAs that are already accessing the Hitachi SAN. However, if NetApp says it isn't supported, then you need to ask them why.
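A sketch of what checking this might look like once the NetApp LUNs are zoned to the existing HBAs (standard Solaris/VxVM commands; the controller name is a placeholder, and VxDMP will also need an Array Support Library that recognizes the NetApp array):
# rebuild the device nodes and have VxVM rescan
devfsadm
vxdctl enable
# both enclosures (Hitachi and NetApp) should now be listed
vxdmpadm listenclosure all
# inspect the paths behind one controller
vxdmpadm getsubpaths ctlr=c2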

Similar Messages

  • How do I map Hitachi SAN LUNs to Solaris 10 and Oracle 10g ASM?

    Hi all,
    I am working on an Oracle 10g RAC and ASM installation with Sun E6900 servers attached to a Hitachi SAN for shared storage, with Sun Solaris 10 as the server OS. We are using Oracle 10g Release 2 (10.2.0.3) RAC Clusterware
    for the clustering software, raw devices for shared storage, and the Veritas VxFS 4.1 filesystem.
    My question is this:
    How do I map the raw devices and LUNs on the Hitachi SAN to Solaris 10 OS and Oracle 10g RAC ASM?
    I am aware that with an Oracle 10g RAC and ASM instance, one needs to configure the ASM instance initialization parameter file to set the asm_diskstring setting to recognize the LUNs that are presented to the host.
    I know that Sun Solaris 10 uses the /dev/rdsk/cWtXdYsZ naming convention at the OS level for disks. However, how would I map this to the Oracle 10g ASM settings?
    I cannot find this critical piece of information anywhere!
    Thanks for your help!

    Yes, that is correct; however, the Solaris 10 MPxIO multipathing software we are using with the Hitachi SAN adds an extra layer of complexity to the ASM configuration, which means ASM may get confused when it attempts to find the new LUNs from the Hitachi SAN at the Solaris OS level. Oracle Metalink note 396015.1 describes this issue.
    So my question is this: how do I configure the ASM instance initialization parameter asm_diskstring to recognize the new Hitachi LUNs presented to the Solaris 10 host?
    Let's say that I have the following new LUNs:
    /dev/rdsk/c7t1d1s6
    /dev/rdsk/c7t1d2s6
    /dev/rdsk/c7t1d3s6
    /dev/rdsk/c7t1d4s6
    Would I set the ASM initialization parameter asm_diskstring to /dev/rdsk/c7t1d*s6
    so that the ASM instance recognizes my new Hitachi LUNs? Solaris needs to map these LUNs to pseudo devices in the Solaris OS for ASM to recognize the new disks.
    How would I set this up in Solaris 10 with Sun multipathing (MPxIO) and Oracle 10g RAC ASM?
    I want to get this right to avoid the dreaded ORA-15072 errors when creating a diskgroup with external redundancy for the Oracle 10g RAC ASM installation process.
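    A minimal sketch of the pieces involved, assuming the example paths above are the MPxIO pseudo-devices presented to the host and that the oracle user must own the slices (a sketch, not a validated configuration):
    # as root: ASM can only discover slices the oracle user can open
    chown oracle:dba /dev/rdsk/c7t1d[1-4]s6
    chmod 660 /dev/rdsk/c7t1d[1-4]s6
    # in the ASM instance parameter file (init+ASM.ora):
    #   asm_diskstring='/dev/rdsk/c7t1d*s6'
    # or set it from the running ASM instance:
    sqlplus / as sysdba <<'EOF'
    ALTER SYSTEM SET asm_diskstring='/dev/rdsk/c7t1d*s6' SCOPE=SPFILE;
    EOF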

  • Solaris 10 and Hitachi LUN mapping with Oracle 10g RAC and ASM?

    Hi all,
    (This is essentially the same question as in the previous thread: in a Sun E6900 / Solaris 10 / Oracle 10g Release 2 (10.2.0.3) RAC environment with a Hitachi SAN for shared storage, how do I map the raw devices and LUNs on the Hitachi SAN to Solaris 10 and Oracle 10g RAC ASM?)
    Thanks for your help!

    You don't seem to state categorically that you are using Solaris Cluster, so I'll assume it since this is mainly a forum about Solaris Cluster (and IMHO, Solaris Cluster with Clusterware is better than Clusterware on its own).
    Clusterware has to see the same device names from all cluster nodes. This is why Solaris Cluster (SC) is a real benefit over Clusterware on its own: SC provides an automatically managed, consistent namespace, whereas standalone Clusterware forces you to manage symbolic links (or, worse, mknods) to create a consistent namespace yourself.
    So, given the SC consistent namespace, you simply add the raw devices into the ASM configuration, i.e. /dev/did/rdsk/dXsY. If you are using Solaris Volume Manager you would use /dev/md/<setname>/rdsk/dXXX, and if you were using CVM/VxVM you would use /dev/vx/rdsk/<dg_name>/<dev_name>.
    Of course, if you genuinely are using Clusterware on its own, then you have somewhat of a management issue! ... time to think about installing SC?
    Tim
    ---
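    For example (a sketch assuming Solaris Cluster is in place; the DID numbers are whatever the cluster assigns to your shared LUNs):
    # list the cluster-wide DID devices and the physical paths behind them
    scdidadm -L
    # then point ASM discovery at the shared DID slices, e.g. in the ASM parameter file:
    #   asm_diskstring='/dev/did/rdsk/d*s6'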

  • Connecting a 3500 through a Brocade Switch to Hitachi SAN

    I have an E3500 with an x6730A fibre card attached to a Brocade switch, and that switch is attached to a Hitachi SAN. I wanted to find out where I need to start. I can see that the E3500 recognizes the HBA card, but I am unsure where to go from here. Do I need to get Veritas FS, or can I create volumes through Solstice? Any advice would be appreciated. I have all 3 components configured separately but have never connected any of them together.
    # ls -la /dev/cfg
    total 18
    drwxr-xr-x 2 root root 512 Mar 1 12:10 .
    drwxr-xr-x 14 root sys 3072 Mar 9 13:52 ..
    lrwxrwxrwx 1 root root 51 Mar 1 12:10 c0 -> ../../devices/sbus@2,0/SUNW,socal@d,10000/sf@0,0:fc
    lrwxrwxrwx 1 root root 51 Mar 1 12:10 c1 -> ../../devices/sbus@2,0/SUNW,socal@d,10000/sf@1,0:fc
    lrwxrwxrwx 1 root root 46 Mar 1 12:10 c2 -> ../../devices/sbus@3,0/SUNW,fas@3,8800000:scsi
    lrwxrwxrwx 1 root root 47 Mar 1 12:10 c3 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@0,0:fc
    lrwxrwxrwx 1 root root 47 Mar 1 12:10 c4 -> ../../devices/sbus@3,0/SUNW,socal@0,0/sf@1,0:fc
    c3 and c4 are the addresses of the HBA card

    I am currently running Solaris 8 and will be using a Hitachi SAN to host an Oracle 9 database. Currently the Brocade switch can see our Hitachi SAN, but we are stuck on how to connect the E3500 through the x6730A to the Brocade.
    SunOS morrison 5.8 Generic_117350-23 sun4u sparc SUNW,Ultra-Enterprise
    # luxadm probe
    No Network Array enclosures found in /dev/es
    Found Fibre Channel device(s):
    Node WWN:20000020378f9154 Device Type:Disk device
    Logical Path:/dev/rdsk/c0t0d0s2
    Node WWN:20000020379c5e10 Device Type:Disk device
    Logical Path:/dev/rdsk/c1t4d0s2
    Node WWN:20000020375cca44 Device Type:Disk device
    Logical Path:/dev/rdsk/c1t5d0s2
    Node WWN:20000020371ae3ef Device Type:Disk device
    Logical Path:/dev/rdsk/c0t1d0s2
    # format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
    /sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w21000020378f9154,0
    1. c0t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107> /u04
    /sbus@2,0/SUNW,socal@d,10000/sf@0,0/ssd@w21000020371ae3ef,0
    2. c1t4d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248> /u02
    /sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w21000020379c5e10,0
    3. c1t5d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107> /u03
    /sbus@2,0/SUNW,socal@d,10000/sf@1,0/ssd@w21000020375cca44,0
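    A hedged sketch of the usual discovery steps once the switch zone includes both the HBA and the Hitachi ports. Note that the older socal/sf SBus HBAs are FC-AL devices, so fabric attachment may need a loop mode on the switch port or a fabric-capable HBA/driver stack; with the Sun StorEdge SAN (fp/fcp) stack the steps are roughly:
    # see what the fabric-attached controllers report
    cfgadm -al -o show_FCP_dev
    # configure the fabric devices on the HBA controllers (c3/c4 above)
    cfgadm -c configure c3 c4
    # build the /dev/dsk and /dev/rdsk entries
    devfsadm
    # the new LUNs should now appear
    format
    luxadm probe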

  • Dedupe on NetApp and disk reclaim on VMware. Odd results

    Hi, I am currently in the process of reclaiming disk space from our NetApp FAS8020 array running 7-Mode 8.2.1. All of our FlexVols are VMware VMFS datastores and are thin provisioned; none of our datastores are presented over NFS. On the VMware layer we have a mixture of VMs using thin and thick provisioned disks; any new VMs are normally created with thin provisioned disks. Our VMware environment is ESXi 5.0.0 U3, and we also use VSC 4.2.2.
    This has been quite a journey for us, and after a number of hurdles we are now able to see volume space being reclaimed on the NetApp, with the free space returning to the aggregate. To get this working we had to perform a few steps provided by NetApp and VMware. (If we used NFS we could have used the disk reclaim feature in VSC, but because that only works with NFS volumes it wasn't an option for us.)
    NETAPP - Set lun space_alloc to enabled - https://kb.netapp.com/support/index?page=content&id=3013572. This is disabled by default on any version of ONTAP.
    VMWARE - Enable BlockDelete (set the value to 1) on each ESXi host in the cluster - http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007427. This is disabled by default on the version of ESXi we are running.
    VMWARE - Rescan the VMFS datastores in VMware and update the VSC settings for each host (set the recommended host settings). Once done, check that the delete status shows as 'supported': esxcli storage core device vaai status get -d naa - http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2014849
    VMWARE - Log in to the ESXi host, go to /vmfs/volumes and the datastore where you want to run disk reclaim, and run vmkfstools -y percentage_of_deleted_blocks_to_reclaim
    NETAPP - Run sis start -s -d -o /vol/lun - this reruns deduplication, deletes the existing checkpoints and starts afresh.
    While I believe we are seeing savings on the volumes, we are not seeing the savings at the LUN layer in NetApp. The volume usage comes down, and with dedupe on I would expect the volume usage to be lower than the datastore usage, but the LUN usage doesn't go down. Does anyone know why this might be the case? Both our FlexVols and LUNs are thin provisioned, and "space reserved" is unchecked on the LUN.
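    For reference, a hedged sketch of that sequence as commands (volume, datastore and device names are placeholders; preconditions such as taking the LUN offline first are per the NetApp KB above):
    # ONTAP 7-Mode: enable space allocation (SCSI UNMAP) on the LUN
    lun set space_alloc /vol/vm_vol/lun0 enable
    # ESXi 5.x: enable automatic block delete on each host
    esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
    # ESXi: confirm the device reports Delete Status: supported
    esxcli storage core device vaai status get -d naa.60a98000xxxxxxxx
    # ESXi: reclaim from inside the datastore (the number is the percentage of free space to process)
    cd /vmfs/volumes/vm_datastore && vmkfstools -y 60
    # ONTAP 7-Mode: rerun dedupe, discarding existing checkpoints
    sis start -s -d -o /vol/vm_vol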

    Hi,
    The simple answer is yes. It's just a matter of the visibility of the disks on the virtual servers. You need to configure the disks appropriately so that some of them are accessible from both nodes (e.g. the OCR or voting disks) and some are local, but many of the answers depend on the setup you are going to choose.
    Regards,
    Jarek

  • ZFS with Hitachi SAN Questions

    We are moving all our Oracle and zone storage to ZFS using an external Hitachi VSP (with Dynamic Provisioning).
    Is there a best practice for LUN sizes? Since I can create a LUN of any size with HDP, what is the best approach?
    a) If I need a 1TB zpool, do I allocate one 1TB LUN or, for example, 10 x 100GB LUNs? I don't need to do any mirroring or anything like that. Under the hood these LUNs are sitting on the same RAID group. (I guess I could configure the LUNs so they were on different RAID groups, but that is a level of performance far beyond what I will need for this cluster.)
    If we are deploying Oracle in zone clusters, does it make sense to create different zpools for the zonepath itself and for the /u01, /u02, etc. mounts the database will be using? Or should I just create one large zpool for the zonepath and let the DBAs create their /u01, /u02, etc. directories at the root level of the zone?
    Is using ZFS mirroring to migrate from old storage onto new storage feasible?
    i.e. I have some Solaris 10 U5 stand-alone systems running Oracle using LUNs from a USP-V. The USP-V is being replaced with a VSP, and the M5000s are being replaced with new ones (which will be a Solaris Cluster running Solaris 10 Update 9). Can I present new LUNs from the VSP to the old systems, configure a mirror for the existing zpools, and then break that mirror, ultimately ending up on the new storage (and eventually importing these pools on the new nodes)? The issues I see are: a) I'm not sure whether the zpool/zfs version from Update 5 will pose any problems once imported on the Update 9 systems; b) can a mirror be added to a pre-existing zpool that wasn't originally configured that way? (I'm sure it can, but I haven't actually done this.)
    Thanks for any info
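    To illustrate the third question, a minimal sketch of the attach/resilver/detach migration (pool and device names are placeholders; assumes the existing vdev is a single LUN or a mirror):
    # attach the new VSP LUN as a mirror of the existing USP-V LUN
    zpool attach oradata c4t<old_lun>d0 c5t<new_lun>d0
    # wait for the resilver to complete
    zpool status oradata
    # drop the old side of the mirror
    zpool detach oradata c4t<old_lun>d0
    # move the pool to the new cluster nodes
    zpool export oradata
    zpool import oradata      # on the new host
    # an Update 5 pool imports on Update 9; upgrade the on-disk version afterwards if desired
    zpool upgrade oradata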

  • Why SharePoint 2013 Hybrid need SAN certificates and what SAN needs ?

    I've read this TechNet article, but I couldn't understand the required values for SubjectAltName.
    https://technet.microsoft.com/en-us/library/b291ea58-cfda-48ec-92d7-5180cb7e9469(v=office.15)#AboutSecureChannel
    For example, if I build the following servers, what SANs are needed?
    I would also be happy to know why.
    [ServerNames]
     AD DS Server:DS01
     AD FS Server:FS01
     Web Application Proxy Server:PRX01
     SharePoint Server(WFE):WFE01
     SharePoint Server(APL):APL01
     SQL Server:DB01
    [AD DS Domain Name]
     contoso.local
     (Please assume that all of the above servers are joined to this domain)
    [Site collection strategy]
     using a host-named site collection
    [Primary web application URL]
     https://sps.contoso.com
    Thanks.
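    For illustration only, Subject Alternative Name entries are usually requested with a certreq policy file along these lines (the host names below simply reuse the examples above and are not a definitive list of what a SharePoint 2013 hybrid deployment requires; the TechNet article is authoritative for that):
    ; request.inf (hypothetical sketch)
    [Version]
    Signature="$Windows NT$"
    [NewRequest]
    Subject = "CN=sps.contoso.com"
    KeyLength = 2048
    Exportable = TRUE
    MachineKeySet = TRUE
    RequestType = PKCS10
    [Extensions]
    ; 2.5.29.17 is the Subject Alternative Name extension
    2.5.29.17 = "{text}"
    _continue_ = "dns=sps.contoso.com&"
    _continue_ = "dns=fs01.contoso.com"
    Submit it with "certreq -new request.inf request.req" and have your CA issue the certificate.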

    Hi,
    From your description, my understanding is that you have some doubts about SAN.
    If you have a SAN, you can leverage it to make SharePoint a little easier to manage and to tweak SharePoint's performance. From a management standpoint, SANs make it easy to adjust the size and number of SharePoint's hard disks. You could refer to this blog:
    http://windowsitpro.com/sharepoint/best-practices-implementing-sharepoint-san. You can find what a SAN needs in the "Some SAN Basics" part of that blog.
    These articles may help you understand SAN:
    https://social.technet.microsoft.com/Forums/office/en-US/ea4791f6-7ec6-4625-a685-53570ea7c126/moving-sharepoint-2010-database-files-to-san-storage?forum=sharepointadminprevious
    http://blogs.technet.com/b/saantil/archive/2013/02/12/san-certificates-and-sharepoint.aspx
    http://sp-vinod.blogspot.com/2013/03/using-wildcard-certificate-for.html
    Best Regards,
    Vincent Han
    TechNet Community Support

  • NETApp and Cisco 5000s

    I am in the design stages of a data center where we want to use Nexus 5000s, NetApp 3140s and Nexus 2232s for FCoE. From my understanding these devices interoperate like so:
    Servers will have converged NICs that use normal Cat 6 to connect to the 2232s
    The cabling between the 5Ks and the 2Ks will be FET (Fabric Extender Transceiver)
    The cabling between the 7Ks and the 5Ks is Twinax
    So my question would be: how does the NetApp interoperate with the 5Ks? What is the cabling? Someone asked me if I was going to use a storage access layer device with the NetApps. Is that necessary? Recommended?
    Thanks for any help or advice you can give,
    P.

    1) The NetApp target CNAs support attachment via SFP+ and fiber optics or via Twinax cable.
    Note that this depends on which part you've got installed:
    X1139A-R6 - fiber (QLE8142)
    X1140A-R6 - copper (QLE8152)
    2) For interoperability and some configuration examples please refer to the following documents.
    NetApp Interoperability Matrix Tool (IMT)
    https://now.netapp.com/NOW/products/interoperability/
    Fibre Channel over Ethernet (FCoE) End-to-End Deployment Guide
    http://media.netapp.com/documents/TR-3800.pdf
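    As a hedged illustration of the Nexus 5000 side of an FCoE target attachment (the VLAN/VSAN numbers and port are arbitrary examples, not a validated NetApp configuration; the IMT above is authoritative):
    feature fcoe
    vlan 101
      fcoe vsan 10
    vsan database
      vsan 10
    interface Ethernet1/5
      description NetApp CNA target port
      switchport mode trunk
      switchport trunk allowed vlan 1,101
      spanning-tree port type edge trunk
    interface vfc5
      bind interface Ethernet1/5
      no shutdown
    vsan database
      vsan 10 interface vfc5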

  • USB3 Speeds mac mini 2013 and Hitachi Tuoro 4TB

    I am new to USB 3 and have always used FW800 before. I just got a new Mac mini and a Hitachi Touro 4TB, and I am finding the read/write speeds with the Blackmagic Disk Speed Test are really slow! My old FW800 drive is faster.
    anyone have any idea on this?
    Write 64 Read 64       Hitachi Touro 4TB
    Write 81 Read 83       Iomega 1TB FW800 (bus powered)

    FireWire is more efficient than USB, so even though USB 3 is theoretically much faster than FireWire 800, a lot of that advantage is lost. However, I would not have expected the results you see.
    I would also normally have expected a 4TB drive to be faster than a 1TB drive, since a 4TB drive is likely to use newer, faster technology.
    You could check the formatting of the 4TB drive: for use on a Mac it should be partitioned using GUID and formatted as HFS+ (Journaled). If it is MBR and FAT32, that will not help.
    Also, if you have USB 2 devices plugged in as well (this could include the mouse and keyboard), they may be slowing down the USB 3 device. Connecting them to different sockets on the Mac mini should prevent this.
    You might want to read http://www.macworld.com/article/2039427/how-fast-is-usb-3-0-really-.html
    Also http://support.apple.com/kb/ht5172?viewlocale=de_de
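    A quick way to check the partition scheme and file system from Terminal (disk2 is a placeholder; use whatever identifier diskutil list shows for the Touro):
    diskutil list
    diskutil info disk2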

  • Oracle VM and multiple SAN LUNS

    Hi,
    now I have another question.
    On my OVM cluster with 2 servers and SAN LUNs,
    I have created my root repository with OCFS2.
    It is mounted at /OVS/xxx.
    Now I would like some more SAN LUNs connected to both of my servers.
    What is the best practice? Where should I mount them (and how? fstab?)
    And how do I locate the guests on them? (I can't find where to choose the repo in the OVS GUI.)
    Thanks
    Here is some more info:
    /opt/ovs-agent-2.3/utils/repos.py -l
    [   ] c7b7119e-756b-464d-87f4-8ce1ef760adf => /dev/mapper/mpath2
    [ * ] a9c14f22-2f8d-4ea0-b40c-2513831ee119 => /dev/dm-0
    mount
    ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
    /dev/dm-0 on /var/ovs/mount/A9C14F222F8D4EA0B40C2513831EE119 type ocfs2 (rw,_netdev,heartbeat=local)
    /dev/mapper/mpath2 on /var/ovs/mount/A9C14F222F8D4EA0B40C2513831EE119/lun-sata02 type ocfs2 (rw,_netdev,heartbeat=local) >>>> Is that a good idea? I mounted it manually!

    ssolbach wrote:
    "you need to initialize the repository. And after that it should get mounted (not under OVS though... the path was something like /opt/ovs-repositories/mount/ID or something). Sorry, can't check atm."
    It gets mounted in /var/ovs/mount/UUID
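    A hedged sketch of how an extra LUN is typically added as a storage repository with the 2.3 agent tools (using the mpath2 device shown above; check repos.py -h for the exact flags on your agent version):
    # register the LUN as a new storage repository
    /opt/ovs-agent-2.3/utils/repos.py -n /dev/mapper/mpath2
    # list repositories and their UUIDs
    /opt/ovs-agent-2.3/utils/repos.py -l
    # once the agent picks it up, it should appear under /var/ovs/mount/<UUID>
    # (as noted in the reply above), so no manual mount or fstab entry is needed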

  • Using a Netapp and NFS

    I did some searches, and in the past it sounded like the use of NFS wasn't recommended. Has this stance changed in the latest releases of the Sun Messaging Server?
    The reason I ask is that I currently rely on NFS with our current email system to distribute our load across a number of front-end mail servers. We have around 130,000 accounts and receive anywhere from 1 to 2 million messages per day.
    Thanks

    While we do not normally recommend the use of network-attached storage via NFS for mail store use, there are specific products that have been tested and found to work.
    Network Appliance is one vendor we have tested.
    We do continue to find problems, however, and we are working on those. Deleting a mail folder currently causes problems.
    Please note:
    1. Network-attached storage WILL impact your performance. It fights for bandwidth with inbound/outbound mail.
    2. This will ONLY be supported with the current JES2005q4 product. It will NOT be backported to 5.2.
    Personally, I would not touch NFS for mail stores with a 10-foot pole. SAN is a totally different thing, though, and fully supported for all versions. It's not NFS.

  • Magnification and scala sans in pdf file

    A PDF file we created has a curious font (or magnification) problem. All text characters look fine when the PDF is viewed at 100%, but when viewed at 75%, certain numbers (the eights and zeros) appear to shrink to about three-quarter size relative to the surrounding characters. The file prints fine. Our client is concerned about this, and we are at a loss. The font used is Scala Sans, PostScript. Has anyone seen this before?

    "Relative to the other characters" would be the important bit. Of course you'd expect things to be three-quarter size at 75%. Oh, man.

  • FCoE options for Cisco UCS and Compellent SAN

    Hi,
    We have a Dell Compellent SAN with iSCSI and FCoE modules in a pre-production environment.
    It is connected to a new Cisco UCS infrastructure (5108 chassis with 2208 IOMs + B200 M2 blades + 6248 Fabric Interconnects) via the 10G iSCSI module (the FCoE module isn't being used at the moment).
    I reviewed the compatibility matrix for the interconnect, but the Compellent (Dell) SAN is only supported on FI NX-OS 1.3(1) and 1.4(1) without the 6248 and 2208 IOM, which is what we have. I'm sure some of you have a similar hardware configuration to ours, and I'd like to see if there's any supported Cisco FC/FCoE deployment option for the Compellent. We're pretty tight on budget at the moment, so purchasing a couple of Nexus 5K switches or something equivalent for such a small number of chassis (we only have one) is not a preferred option. If additional hardware acquisition is inevitable, what would be the most cost-effective solution to support an FCoE implementation?
    Thank you in advance for your help on this.

    Unfortunately there isn't really one. With direct-attach storage there is still the requirement that an upstream MDS/N5K pushes the zoning to it; without an MDS to push the zoning, the setup isn't recommended for production.
    http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/2.0/b_UCSM_GUI_Configuration_Guide_2_0_chapter_0101.html#concept_05717B723C2746E1A9F6AB3A3FFA2C72
    Even if you had an MDS/N5K, the 6248/2208s wouldn't support the Compellent SAN (see note 9).
    http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperability/matrix/Matrix8.pdf
    That's not to say that it won't work; it's just that we haven't tested it and don't know what it will do, and thus TAC cannot troubleshoot SAN errors on the UCS.
    On the plus side, iSCSI, if set up correctly, can be very solid and can give you a great amount of throughput. Just make sure to configure the QoS correctly, and if you need more throughput, add some additional links.
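    As a hedged sketch of the ESXi-side tuning that usually goes with that advice (the device and vSwitch names are placeholders):
    # use round-robin pathing for the Compellent LUN
    esxcli storage nmp device set --device naa.6000d31000xxxxxx --psp VMW_PSP_RR
    # jumbo frames on the iSCSI vSwitch, if the UCS QoS system class and the array are configured for MTU 9000
    esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000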

  • Mavericks and Hitachi GRAID

    Hello. I have a MacBook Pro 17-inch, Late 2011. I used to have Lion installed, but last Friday I installed Mavericks, and since then I have been having problems using the Hitachi G-RAID external hard drive. I haven't had any problems with other external hard drives.

    Hello rockandrollfilms
    See if it shows up in Disk Utility on your Mac and verify the disk to ensure that there are no errors on the hard drive.
    Using Disk Utility to verify or repair disks
    http://support.apple.com/kb/ht1782
    Regards,
    -Norm G.

  • NetApp and ASM

    Hi,
    I am setting up a new 11g database on NetApp storage, and I am not sure if I should be using ASM for the datafiles and logs. The NetApp best practices guide suggests only using ASM for cluster files, but it does seem that Oracle wants us to use ASM for datafiles.
    Does anyone have a suggestion?
    Thanks
    Randy

    ASM, raw, cooked, NFS... they all work just fine. If you are going to use ASM and want to take advantage of some cool NetApp features, then you will want to set up a couple of disk groups, purely for functionality (backup, recovery, clone or DR). This holds true for raw, cooked, NFS, etc. as well.
    As for the previous post: NetApp is not RAID 5, does not use copy-on-write snapshots, and does not take a performance hit at 80% full, etc.
    But I do agree with the statement that with NetApp you do not necessarily need to use ASM for a single-aggregate design, as NetApp can handle the I/O balancing.
    They're all right; it's just a question of which one works best for your situation.
    Mike
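    For example, a minimal sketch of the "couple of disk groups" idea for LUN-based ASM disks (device paths are hypothetical), run against the 11g ASM instance:
    sqlplus / as sysasm <<'EOF'
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/dev/rdsk/c3t0d1s6', '/dev/rdsk/c3t0d2s6';
    CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
      DISK '/dev/rdsk/c3t0d3s6';
    EOF
    Keeping datafiles in DATA and the recovery area in FRA lets the underlying NetApp volumes be snapshotted, cloned or replicated independently.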
