NetApp vs HP EVA

I am looking for comments about both of these storage vendors for virtualization with VMware. Both companies are key players in the storage market. We plan to virtualize everything we can: Oracle, Exchange, file services, and various single-server applications. I am especially interested in storage-to-storage failover (two data centers) and how, or whether, each vendor handles the network transfer when this happens. I am also looking for other universities that may have already done what we plan to do, for comments on both of these storage technologies.
Thanks in advance.

You can also have a look at StarWind 5.0, the latest software iSCSI target with an HA feature.

Similar Messages

  • SAP Data Storage Migration from HP EVA SAN to NetApp FAS3070 FMC for M5000s

    Good day all
    We need to perform a storage migration for SAP data that is currently on 2 HP EVA SANs. We have 2 Sun M5000s, 2 Sun E2900s and a couple of V490s, which all connect to the SAN via Cisco 9506 Directors. We have recently commissioned a NetApp Fabric MetroCluster on 2 FAS3070s and need to move our SAP data from the EVAs to the new MetroCluster. Our Sun boxes are running Solaris 10. It was suggested that we use LVM to move the data, but I have no knowledge when it comes to Solaris.
    I have some questions, which I hope someone can assist me in answering:
    - Can we perform a live transfer of this data with low risk, using LVM? (Non-disruptive migration of 11Tb)
    - Is LVM a wise choice for this task? We have Replicator X too, but have had challenges using it on another Metrocluster.
    - I would like to migrate our Sandbox, as a test migration (1.5Tb), and to judge the speed of the data migration. Then move all DEV and QA boxes across, before Production data. There are multiple zones on the hardware mentioned above. Is there no simple way of cloning data from the HP to the NetApp, and then re-synching before going live on the new system?
    - Would it be best to have LUNs created with the same volume on the new SAN as the HP EVA sizings, or is it equally simple to create "Best Practise" sized LUNs on the other side before copying data across? Hard to believe it would be equally simple, but we would like to size the LUNs properly.
    Please assist, I can get further answers, if there are any questions in this regard.
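    A minimal sketch of the host-based mirroring approach, assuming the existing EVA LUNs are (or can be placed) under Solaris Volume Manager control; the metadevice and disk names below are hypothetical and would need to match your actual layout:
    # Assume the EVA LUN is submirror d11 of mirror d10, and the new NetApp LUN
    # is visible to Solaris as c3t600A098000486E2Fd0 (hypothetical device names)
    metainit d12 1 1 c3t600A098000486E2Fd0s0   # build a submirror on the NetApp LUN
    metattach d10 d12                          # attach it; the resync runs while the filesystem stays online
    metastat d10                               # repeat until both submirrors report "Okay"
    metadetach d10 d11                         # drop the EVA submirror once the resync completes
    metaclear d11                              # release the old EVA device
    The same cycle can be run per volume (Sandbox first, then DEV/QA, then Production). A submirror only has to be at least as large as the existing mirror, so the NetApp LUNs can be sized to best practice up front, although any extra space is not used until the volume is grown afterwards.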


  • "Best" Allocation Unit Size (AU_SIZE) for ASM diskgroups when using NetApp

    We're building a new non-RAC 11.2.0.3 system on x86-64 RHEL 5.7 with ASM diskgroups stored on a NetApp device (don't know the model # since we are not storage admins but can get it if that would be helpful). The system is not a data warehouse--more of a hybrid than pure OLTP or OLAP.
    In the Oracle® Database Storage Administrator's Guide 11g Release 2 (11.2), E10500-02, Oracle recommends setting the allocation unit (AU) size for a disk group to 4MB (vs. the default of 1MB) to enhance performance. However, it also says that to take advantage of the au_size benefits, the operating system (OS) I/O size should be set "to the largest possible size."
    http://docs.oracle.com/cd/E16338_01/server.112/e10500/asmdiskgrps.htm
    Since we're using NetApp as the underlying storage, what should we ask our storage and sysadmins (we don't manage the physical storage or the OS) to do:
    * What do they need to confirm and/or set regarding I/O on the Linux side
    * What do they need to confirm and/or set regarding I/O on the NetApp side?
    On some other 11.2.0.2 systems that use ASM diskgroups, I checked v$asm_diskgroup and see we're currently using a 1MB Allocation Unit Size. The diskgroups are on an HP EVA SAN. I don't recall, when creating the diskgroups via asmca, if we were even given an option to change the AU size. We're inclined to go with Oracle's recommendation of 4MB. But we're concerned there may be a mismatch on the OS side (either Redhat or the NetApp device's OS). Would rather "first do no harm" and stick with the default of 1MB before going with 4MB and not knowing the consequences. Also, when we create diskgroups we set Redundancy to External--because we'd like the NetApp device to handle this. Don't know if that matters regarding AU Size.
    Hope this makes sense. Please let me know if there is any other info I can provide.

    Thanks Dan. I suspected as much due to the absence of info out there on this particular topic. I hear you on the comparison with deviating from a tried-and-true standard 8K Oracle block size. Probably not worth the hassle. I don't know of any particular justification with this system to bump up the AU size--especially if this is an esoteric and little-used technique. The only justification is official Oracle documentation suggesting the value change. Since it seems you can't change an ASM diskgroup's AU size once you create it, and since we won't have time to benchmark using different AU sizes, I would prefer to err on the side of caution--i.e. first do no harm.
    Does anyone out there use something larger than a 1MB AU size? If so, why? And did you benchmark between the standard size and the size you chose? What performance results did you observe?
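    For reference, the AU size can only be chosen when a diskgroup is created, so this is purely a create-time decision. A minimal sketch (hypothetical diskgroup and disk names, run as SYSASM) for creating a 4MB-AU group and checking the AU size of existing groups:
    SQL> CREATE DISKGROUP DATA_4M EXTERNAL REDUNDANCY
      2    DISK '/dev/mapper/asm_disk1'
      3    ATTRIBUTE 'au_size' = '4M', 'compatible.asm' = '11.2';
    SQL> SELECT name, allocation_unit_size/1024/1024 AS au_mb
      2    FROM v$asm_diskgroup;
    asmca also exposes the au_size attribute under its advanced options when creating a diskgroup (at least in 11.2), so sticking with the 1MB default simply means leaving that field alone.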

  • ORA-27086: unable to lock file over NFS -- but it's NOT Netapp!

    My 10.2 database crashed, and when it came back I got the following error:
    ORA-00202: control file: '/local/opt/oracle/product/10.2.0/dbs/lkFOOBAR'
    ORA-27086: unable to lock file - already in use
    Linux-x86_64 Error: 11: Resource temporarily unavailable
    This is a classic symptom of a Netapp problem, which likes to hold file locks open on NFS mounts. There is a standard procedure for clearing those locks; see, for instance, document 429912.1 on Metalink.
    Unfortunately, my files are mounted on an Isilon, one of Netapp's twisted cousins. I can find no references to "isilon" on Metalink, and we are at a loss how to resolve this.
    My sysadmin assures me that "there are no locks on the Isilon". But I know this cannot be the case, because if I do the following:
    1. delete the lockfile /local/opt/oracle/product/10.2.0/dbs/lkFOOBAR, and then
    2. move my controlfiles aside, and then copy them back into place,
    then the database will mount. However, it will not open, because now all the datafiles have locks.
    Is there anyone with experience in clearing NFS locks? I know this is more of a SA task than DBA, but I am sure my SA has overlooked something.
    Thanks

    New information:
    As stated above, I moved the controlfiles aside and then copied them back into place, like this:
    mv control01.ctl control01-bak.ctl
    cp control01-bak.ctl control01.ctl
    Did that for each controlfile, and then the database mounted.
    But, after rebooting the machine, we discovered that all locks were back in place-- it looks like the system is locking the files on boot-up, and not letting them go. The lock is held by PID 1, which is init.
    sculkget: lock held by PID: 1
    This is definitely looking like a major system issue, and not a DBA issue, and hence I have little right to expect assistance in this forum. But nonetheless I lay my situation out here before you all in case someone else recognizes my problem before the server bursts into flames!
    The system is CentOS 4.5-- not my choice, but that's the way it is.
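    In case it helps anyone hitting the same thing, a minimal checklist for the SA side (paths taken from the errors above; these are generic Linux NFS-client checks, nothing Isilon-specific):
    # See which local process, if any, holds the lock Oracle complains about
    fuser -v /local/opt/oracle/product/10.2.0/dbs/lkFOOBAR
    lsof /local/opt/oracle/product/10.2.0/dbs/lkFOOBAR
    # Confirm how the Oracle filesystems are mounted; locking behaviour differs
    # between the default NFS options and "nolock"
    mount | grep /local/opt/oracle
    # Stale client-side NLM locks are usually cleared by restarting the lock
    # manager on the client (service name varies by distro; "nfslock" on CentOS 4)
    service nfslock restart
    If the lock really is held by PID 1 after a reboot, that points at something in the boot sequence touching the Oracle files before the database starts, rather than a lock left behind on the filer.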

  • EVA 4400 disk performance

    Hi
    We're running HP blade system on Hyper-V 2008 R2 cluster with EVA 4400 storage.
    EVA has two disk groups
    - diskgroup 0 (enclosure 0) with 10 FC disks 400GB 10k
    - diskgroup 1 (enclosure 1) with 8 FC disks 300GB 15k
    We're dealing with a big impact on disk I/O performance on SQL servers running as VMs on the Hyper-V hosts (Win2008 + SQL 2008).
    R/W performance is between 20-30MB/s, but only on servers running SQL. On all other VM servers without SQL, performance is between 50-60MB/s.
    We have already consulted Microsoft support; they didn't find any issues with the cluster or the VMs, and there is no other HW impact on performance.
    Our only conclusion is that the SQL servers are generating so many I/O requests that the EVA or the HBAs on the hosts can't handle them.
    Is this true, or how can we check that? SQL Profiler shows normal activity, and there are no other issues with the operating system, so we're certain there must be some kind of connection between VM - HOST - EVA.
    Also, running an R/W test directly on a host connected to the EVA, read results are between 200-300MB/s, but write results are between 60-100MB/s.
    Any additional points how to proceed next are welcome.
    Regards,
    Miha

    I think it might be beneficial to take a look at the array performance statistics. The evaperf tool can be found at C:\Program Files\Hewlett-Packard\EVA Performance Monitor if it was selected during the CommandView installation. Some useful stats to collect are:
    evaperf cs -cont 60 -dur 3600 -csv -ts2 -sz xxxx-xxxx-xxxx-xxxx > controller.csv
    evaperf hps -cont 60 -dur 3600 -csv -ts2 -sz xxxx-xxxx-xxxx-xxxx > hostport.csv
    evaperf vd -cont 60 -dur 3600 -csv -ts2 -sz xxxx-xxxx-xxxx-xxxx > vdisk.csv
    evaperf pdg -cont 60 -dur 3600 -csv -ts2 -sz xxxx-xxxx-xxxx-xxxx > diskgroup.csv
    evaperf pda -cont 60 -dur 3600 -csv -ts2 -sz xxxx-xxxx-xxxx-xxxx > disk.csv
    evaperf vdg -cont 60 -dur 3600 -csv -ts2 -sz xxxx-xxxx-xxxx-xxxx > vdiskgroup.csv
    where xxxx-xxxx-xxxx-xxxx is the world wide name of the array, e.g. 5000-1FE1-5000-A9F0. You can probably find it on the controller LCD or some serial number label sticker at the back.
    Because the Windows Server command prompt does not support running commands in the background (at least I do not know how), unlike an HP-UX server, you will have to open several command prompt windows and run these commands at the same time. Then zip the output and post it somewhere so that we can all take a look.
    -cont 60 here means we take one sample every 60 seconds, and -dur 3600 means for a duration of 3600 seconds, which is one hour. For a start, we do not want to collect a huge amount of data. We probably want to collect one set when the array has no problem and one set when the array has a problem, then make a comparison.

  • Lock Up Your Data for Up to 90% Less Cost than On-Premises Solutions with NetApp AltaVault

    June 2015
    Explore
    Data-Protection Services from NetApp and Services-Certified Partners
    Whether delivered by NetApp or by our professional and support services certified partners, these services help you achieve optimal data protection on-premises and in the hybrid cloud. We can help you address your IT challenges for protecting data with services to plan, build, and run NetApp solutions.
    Plan Services—We help you create a roadmap for success by establishing a comprehensive data protection strategy for:
    Modernizing backup for migrating data from tape to cloud storage
    Recovering data quickly and easily in the cloud
    Optimizing archive and retention for cold data storage
    Meeting internal and external compliance regulations
    Build Services—We work with you to help you quickly derive business value from your solutions:
    Design a solution that meets your specific needs
    Implement the solution using proven best practices
    Integrate the solution into your environment
    Run Services—We help you optimize performance and reduce risk in your environment by:
    Maximizing availability
    Minimizing recovery time
    Supplying additional expertise to focus on data protection
    Rachel Dines
    Product Marketing, NetApp
    The question is no longer if, but when you'll move your backup-and-recovery storage to the cloud.
    As a genius IT pro, you know you can't afford to ignore cloud as a solution for your backup-and-recovery woes: exponential data growth, runaway costs, legacy systems that can't keep pace. Public or private clouds offer near-infinite scalability, deliver dramatic cost reductions and promise the unparalleled efficiency you need to compete in today's 24/7/365 marketplace.
    Moreover, an ESG study found that backup and archive rank first among workloads enterprises are moving to the cloud.
    Okay, fine. But as a prudent IT strategist, you demand airtight security and complete control over your data as well. Good thinking.
    Hybrid Cloud Strategies Are the Future
    Enterprises, large and small, are searching for the right blend of availability, security, and efficiency. The answer lies in achieving the perfect balance of on-premises, private cloud, and public services to match IT and business requirements.
    To realize the full benefits of a hybrid cloud strategy for backup and recovery operations, you need to manage the dynamic nature of the environment— seamlessly connecting public and private clouds—so you can move your data where and when you want with complete freedom.
    This begs the question of how to integrate these cloud resources into your existing environment. It's a daunting task. And, it's been a roadblock for companies seeking a simple, seamless, and secure entry point to cloud—until now.
    Enter the Game Changer: NetApp AltaVault
    NetApp® AltaVault® (formerly SteelStore) cloud-integrated storage is a genuine game changer. It's an enterprise-class appliance that lets you leverage public and private clouds with security and efficiency as part of your backup and recovery strategy.
    AltaVault integrates seamlessly with your existing backup software. It compresses, deduplicates, encrypts, and streams data to the cloud provider you choose. AltaVault intelligently caches recent backups locally while vaulting older versions to the cloud, allowing for rapid restores with off-site protection. This results in a cloud-economics–driven backup-and-recovery strategy with faster recovery, reduced data loss, ironclad security, and minimal management overhead.
    AltaVault delivers both enterprise-class data protection and up to 90% less cost than on-premises solutions. The solution is part of a rich NetApp data-protection portfolio that also includes SnapProtect®, SnapMirror®, SnapVault®, NetApp Private Storage, Cloud ONTAP®, StorageGRID® Webscale, and MetroCluster®. Unmatched in the industry, this portfolio reinforces the data fabric and delivers value no one else can provide.
    Figure 1) NetApp AltaVault Cloud-Integrated Storage Appliance.
    Source: NetApp, 2015
    Four Ways Your Peers Are Putting AltaVault to Work
    How is AltaVault helping companies revolutionize their backup operations? Here are four ways your peers are improving their backups with AltaVault:
    Killing Complexity. In a world of increasingly complicated backup and recovery solutions, financial services firm Spot Trading was pleased to find its AltaVault implementation extremely straightforward—after pointing their backup software at the appliance, "it just worked."
    Boosting Efficiency. Australian homebuilder Metricon struggled with its tape backup infrastructure and rapid data growth before it deployed AltaVault. Now the company has reclaimed 80% of the time employees formerly spent on backups—and saved significant funds in the process.
    Staying Flexible. Insurance broker Riggs, Counselman, Michaels & Downes feels good about using AltaVault as its first foray into public cloud because it isn't locked in to any one approach to cloud—public or private. The company knows any time it wants to make a change, it can.
    Ensuring Security. Engineering firm Wright Pierce understands that if you do your homework right, it can mean better security in the cloud. After doing its homework, the firm selected AltaVault to securely store backup data in the cloud.
    Three Flavors of AltaVault
    AltaVault lets you tap into cloud economics while preserving your investments in existing backup infrastructure, and meeting your backup and recovery service-level agreements. It's available in three form factors: physical, virtual, and cloud-based.
    1. AltaVault Physical Appliances
    AltaVault physical appliances are the industry's most scalable cloud-integrated storage appliances, with capacities ranging from 32TB up to 384TB of usable local cache. Companies deploy AltaVault physical appliances in the data center to protect large volumes of data. These datasets typically require the highest available levels of performance and scalability.
    AltaVault physical appliances are built on a scalable, efficient hardware platform that's optimized to reduce data footprints and rapidly stream data to the cloud.
    2. AltaVault Virtual Appliances for Microsoft Hyper-V and VMware vSphere
    AltaVault virtual appliances are an ideal solution for medium-sized businesses that want to get started with cloud backup. They're also perfect for enterprises that want to safeguard branch offices and remote offices with the same level of protection they employ in the data center.
    AltaVault virtual appliances deliver the flexibility of deploying on heterogeneous hardware while providing all of the features and functionality of hardware-based appliances. AltaVault virtual appliances can be deployed onto VMware vSphere or Microsoft Hyper-V hypervisors—so you can choose the hardware that works best for you.
    3. AltaVault Cloud-based Appliances for AWS and Microsoft Azure
    For organizations without a secondary disaster recovery location, or for companies looking for extra protection with a low-cost tertiary site, cloud-based AltaVault appliances on Amazon Web Services (AWS) and Microsoft Azure are key to enabling cloud-based recovery.
    On-premises AltaVault physical or virtual appliances seamlessly and securely back up your data to the cloud. If the primary site is unavailable, you can quickly spin up a cloud-based AltaVault appliance in AWS or Azure and recover data in the cloud. Usage-based, pay-as-you-go pricing means you pay only for what you use, when you use it.
    AltaVault solutions are a key element of the NetApp vision for a Data Fabric; they provide the confidence that—no matter where your data lives—you can control, integrate, move, secure, and consistently manage it.
    Figure 2) AltaVault integrates with existing storage and software to securely send data to any cloud.
    Source: NetApp, 2015
    Putting AltaVault to Work for You
    Four common use cases illustrate the different ways that AltaVault physical and virtual appliances are helping companies augment and improve their backup and archive strategies:
    Backup modernization and refresh. Many organizations still rely on tape, which increases their risk exposure because of the potential for lost media in transport, increased downtime and data loss, and limited testing ability. AltaVault serves as a tape replacement or as an update of old disk-based backup appliances and virtual tape libraries (VTLs).
    Adding cloud-integrated backup. AltaVault makes a lot of sense if you already have a robust disk-to-disk backup strategy, but want to incorporate a cloud option for long-term storage of backups or to send certain backup workloads to the cloud. AltaVault can augment your existing purpose-built backup appliance (PBBA) for a long-term cloud tier.
    Cold storage target. Companies want an inexpensive place to store large volumes of infrequently accessed file data for long periods of time. AltaVault works with CIFS and NFS protocols, and can send data to low-cost public or private storage for durable long-term retention.
    Archive storage target. AltaVault can provide an archive solution for database logs or a target for Symantec Enterprise Vault. The simple-to-use AltaVault management platform can allow database administrators to manage the protection of their own systems.
    We see two primary use cases for AltaVault cloud-based appliances, available in AWS and Azure clouds:
    Recover on-premises workloads in the cloud. For organizations without a secondary disaster recovery location, or for companies looking for extra protection with a low-cost tertiary site, AltaVault cloud-based appliances are key to enabling cloud-based disaster recovery. Via on-premises AltaVault physical or virtual appliances, data is seamlessly and securely protected in the cloud.
    Protect cloud-based workloads.  AltaVault cloud-based appliances offer an efficient and secure approach to backing up production workloads already running in the public cloud. Using your existing backup software, AltaVault deduplicates, encrypts, and rapidly migrates data to low-cost cloud storage for long-term retention.
    The benefits of cloud—infinite, flexible, and inexpensive storage and compute—are becoming too great to ignore. AltaVault delivers an efficient, secure alternative or addition to your current storage backup solution. Learn more about the benefits of AltaVault and how it can give your company the competitive edge you need in today's hyper-paced marketplace.
    Rachel Dines is a product marketing manager for NetApp where she leads the marketing efforts for AltaVault, the company's cloud-integrated storage solution. Previously, Rachel was an industry analyst for Forrester Research, covering resiliency, backup, and cloud. Her research has paved the way for cloud-based resiliency and next-generation backup strategies.

    You didn't say what phone you have - but you can set it to update and backup and sync over wifi only - I'm betting that those things are happening "automatically" using your cellular connection rather than wifi.
    I sync my email automatically when I have a wifi connection, but I can sync manually if I need to.  Downloads happen for me only on wifi, photo and video backup are only over wifi, app updates are only over wifi....check your settings.  Another recent gotcha is Facebook and videos.  LOTS of people are posting videos on Facebook and they automatically download and play UNLESS you turn them off.  That can eat up your data in a hurry if you are on FB regularly.

  • Steps how to upgrade HP EVA 6400 Storage Upgrade/ 3*Disk Enclosure/ 28*600GB 15K FC HDD

    I want to know how to upgrade an HP EVA 6400 storage array (3 disk enclosures, 28 x 600GB 15K FC HDDs) and how to upgrade memory on our existing X3800 Network Storage Systems.

    This appears to be for a commercial or non-consumer setup.
    You may get a better response from here.

  • Windows 2008 R2 Failover Cluster - Netapp FAS6280 Black Screen on Restart when in cluster IGROUP

    Hello everyone,
    I've been working on this issue for a month now and have tickets open with Netapp and Microsoft.  We have not been able to resolve the issue as of yet and since I'm losing all my hair I thought I should post it...
    Scenario:
    Two Windows 2008 R2 Enterprise SP Fail-Over Clusters (C1 & C2)
    Two Netapp FAS6280 Filers for SAN storage
    8 DELL R815 servers w/ 2x QLE2562 8GB FC HBAs all paths used (9.1.9.47 - newer did not resolve)
    Netapp DSM 4.0 & SnapManager 6.4.2 (we also tried 1 rev newer, 1 rev older without luck)
    C1 has 2 nodes & C2 has 4 nodes.
    We are trying to add new nodes into the clusters.  We were originally on EMC Clariion arrays and have been migrated to the netapp.  Prior to moving we had 3 nodes in C2 and added node 4.
    Now we want to add 2 more nodes, one to each cluster. We were able to configure the OS and get everything installed and matching the other nodes. We can assign the HBAs to a test igroup, assign a LUN, format it, use it, and restart; all is well and normal.
    As soon as we move the HBAs to the igroup for C1's or C2's LUNs, the servers don't want to restart properly. They hang at a black screen. The little green Windows start splash screen runs the bar at the bottom; it is then supposed to fade out and come back with the grey screen saying "Starting Windows", but that doesn't always happen. Sometimes it can take 4 hours to restart, or the machine will not come back online at all. The pre-existing nodes *do not* experience this issue.
    They restart in 5-10 minutes without issue.
    We know drivers are part of the issue, but the maddening part is that it all works fine until we try to add the HBAs to the existing cluster storage group. We have a suggestion to create separate igroups for each cluster node and add all LUNs to the separate igroups. We configured this on one node and it still takes a long time to restart. We plan to modify the other nodes during our maintenance window and try again.
    On the other cluster we were able to successfully complete the validation wizard and join. Everything fails over and works, except the server does not come up upon restart. If I unplug the HBAs it will come up fine, then I can plug them in and the cluster functions.
    Does anyone have any other ideas for us to try?  Currently support for both netapp and Microsoft appear stumped.  Sunday is our Window and we plan to try reconfiguring but I'm losing optimism...
    Thanks for any feedback/ideas!
    -Ryan

    Thanks everyone who looked at this and offered feedback.
    The issue appears to be resolved after the configuration change suggested by Netapp support. 
    The originally recommended configuration for our storage migration was similar to how I've seen storage groups configured on EMC arrays in the past: one storage group / igroup, with all HBAs/initiators for each host in the WFC/SQL cluster added to this group; that group then gets all LUNs assigned to it.
    The change in configuration that was recommended is to have one storage group / igroup for each host and put all initiators/HBAs for each host in their own separate group, then assign all the LUNs to each of the individual host igroups.
    There is a lack of documentation on this, but it appears to work. We had one hit for separate igroups, but had to go through a couple of support techs to get one to confirm this.
    After applying this configuration in our change window today the issue is resolved. We were able to fail all resources onto the new node and restart it without issue. The question remaining for us is why the old systems continued to work while the new nodes did not. The old nodes were originally migrated off an EMC array, and the version of ONTAP at that time was 8.1.2; they worked fine.
    We had upgraded to 8.1.3 to resolve an issue with Flash Cache, and it seems like machines that we tried to attach to this igroup after that time did not work, with all drivers, software, etc. the same. We are wondering if this is somehow related and have asked about it.
    I wanted to update this thread, and perhaps it can save someone else some trouble down the line.
    -Ryan
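    For anyone searching later, here is a minimal sketch of the per-host igroup layout described above, in Data ONTAP 7-Mode syntax with hypothetical igroup names, WWPNs, and LUN paths:
    igroup create -f -t windows c2_node5_ig 21:00:00:24:ff:aa:bb:01 21:00:00:24:ff:aa:bb:02
    igroup create -f -t windows c2_node6_ig 21:00:00:24:ff:cc:dd:01 21:00:00:24:ff:cc:dd:02
    lun map /vol/sqlvol/lun0 c2_node5_ig 0
    lun map /vol/sqlvol/lun0 c2_node6_ig 0
    igroup show
    lun show -m
    Each cluster node gets its own igroup containing only that node's initiators, and every shared LUN is mapped to each node's igroup with the same LUN ID, instead of one large igroup holding every initiator in the cluster.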

  • In desperate need of config assistance for 6513 trunking to Netapp controller

    We are building a new SAN using a NetApp FAS3160 with 2 controllers in failover mode. We have one 6513 switch they will connect to for the EtherChannels. Each NetApp controller will need an LACP port channel with 4 gigabit interfaces running to the 6513. I have tried to set up the port channels on the 6513 by adding the interfaces into them with the following port config. The channel comes up fine, but routing to the NetApp fails immediately after bringing up the trunk, and the port channel will eventually show down/down while the individual interfaces stay up/up. I have also tried creating the trunks using the "on" mode and they will not stay up either. I am at a loss as to why the channels quit routing and eventually go down.
    partial cisco config
    interface Port-channel10
    switchport
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    spanning-tree guard loop
    interface GigabitEthernet1/29
    description NetApp
    switchport
    switchport access vlan 34
    switchport mode trunk
    switchport nonegotiate
    spanning-tree portfast trunk
    spanning-tree guard loop
    channel-protocol lacp
    channel-group 10 mode active
    Anyone with any experience of this type, please help.  This should not be that hard, but the Netapp doco has conflicting info for modes, etc.  I can provide more detail if someone needs it.

    Hi, here is my config for my trunk from a Cisco 4507R switch trunking to a NetApp FAS2050:
    interface GigabitEthernet5/14
    description NetApp Controller
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    channel-group 22 mode active
    end
    UK-LON-SW01#sh run int gi6/14
    Building configuration...
    Current configuration : 183 bytes
    interface GigabitEthernet6/14
    description NetApp Controller
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    channel-group 22 mode active
    end
    UK-LON-SW01#sh run int po22
    Building configuration...
    Current configuration : 149 bytes
    interface Port-channel22
    description NetApp
    switchport
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport nonegotiate
    My initial troubles in getting the port-channel to come up were related to the config the SAN admin did on the NetApp controller; the Cisco config is pretty basic/straightforward.
    hope that helps.
    Ashar.
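    For completeness, a sketch of the matching NetApp-side config, assuming Data ONTAP 7-Mode; the interface names, IP address, and partner interface are hypothetical, and the VLAN matches the one in the question:
    vif create lacp vif0 -b ip e0a e0b e0c e0d
    vlan create vif0 34
    ifconfig vif0-34 10.1.34.10 netmask 255.255.255.0 partner vif0-34
    With LACP the Cisco side should use channel-group mode active (as in the configs above), and all controller ports in the vif must run at the same speed and duplex for the channel to form.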

  • Snapshot Backups on HP EVA SAN

    Hi everyone,
    We are implementing a new HP EVA SAN for our SAP MaxDB Wintel environment.  As part of the SAN setup we will be utilising the EVAs snapshot technology to perform a nightly backup.
    Currently HP Data Protector does not support MaxDB for its "Zero Downtime Backup" concept (ZDB), thus we need to perform LUN snapshots using the EVAs native commands.  ZDB would have been nice as it integrates into SAP and lets the DB/SAP know when a snapshot backup has occurred.  However as I mentioned this feature is not available on MaxDB (only SAP on Oracle).
    We are aware that SAP supports snapshots on external storage devices as stated in OSS notes 371247 and 616814.
    To perform the snapshot we would do something similar (if not exactly) like note 616814 describes as below:
    To create the split mirror or snapshot, proceed as follows:
                 dbmcli -d <database_name> -u < dbm_user>,<password>
                      util_connect < dbm_user>,<password>
                      util_execute suspend logwriter
                   ==> Create the snapshot on the EVA
                      util_execute resume logwriter
                      util_release
                      exit
    Obviously MaxDB and SAP are unaware that a "backup" has been performed. This poses a couple of issues that I would like to see if anyone has a solution to.
    a.  To enable automatic log backup, MaxDB must know that it has first completed a "full" backup. Is it possible to make MaxDB aware that a snapshot backup has been taken of the database, thus allowing us to enable automatic log backup?
    b.  SAP also likes to know it has been backed up. EarlyWatch Alert reports start to get a little upset when you don't perform a backup on the system for a while.
    Also, DB12 will mention that the system isn't in a recoverable state, when in fact it is. Any workarounds available here?
    Cheers
    Shaun

    Hi Shaun,
    interesting thread so far...
    > It would be nice to see HP and SAP(MaxDB) take the snapshot technology one or two steps further, to provide a guaranteed consistent backup, and can be block level verified.  I think HPs ZDB (zero downtime backup eg snapshots) technology for SAP on Oracle using Data Protector does this now?!??!
    Hmm... I guess the keyword here is 'market'. If there is enough market potential visible, I tend to believe that both SAP and HP would happily try to deliver such tight integration.
    I don't know how this ZDB stuff works with Oracle, but how could the HP software possibly know what an Oracle block should look like?
    No, there are just these options to actually check for block consistency in Oracle:  use RMAN, use DBV or use SQL to actually read your data (via EXP, EXPDB, ANALYZE, custom SQL)
    Even worse, you might come across block corruptions that are not covered by these checks really.
    > Data corruption can mean so many things.  If you're talking structure corruption or block corruption, then you do hope that your consistency checks and database backup block checks will bring this to the attention of the DBA.  Hopefully recovery of the DB from tape and rolling forward would resolve this.
    Yes, I was talking about data block corruption. Why? Because there is no reliable way to actually perform a semantic check of your data. None.
    We (SAP) simply rely on that, whatever we write to the database by the Updater is consistent from application point of view.
    Having handled far too much remote consulting messages concerning data rescue due to block corruptions I can say: getting all readable data from the corrupt database objects is really the easy part of it.
    The problems begin to get big, once the application developers need to think of reports to check and repair consistency from application level.
    > However, if you're talking data corruption as in "crap data" has been loaded into the database, or a rogue ABAP has corrupted several million rows of data, then this becomes a little more tricky.  If the issue is identified immediately, restoring from backup is a feasible option for us.
    > If the issue happened over 48hrs ago, then restoring from a backup is not an option.  We are a 24x7x365 manufacturing operation, shipping goods all around the world.  We produce and ship too much product in a 24hr window that cannot be rekeyed (or so the business says) if the data is lost.
    Well, in that case you're doomed. Plain and simple. Don't put any effort into getting "tricky"; just never, ever run any piece of code that has not passed the whole test factory. That's really the only chance.
    > We would have to get tricky and do things such as restore a copy of the production database to another server, and extract the original "good" documents from the copy back into the original, or hopefully the rogue ABAP can correct whatever mistake they originally made to the data.
    That's not a recovery plan - that is praying for mercy.
    I know quite a few customer systems that went to this "solution" and had inconsistencies in their system for a long long time afterwards.
    > Look...there are hundreds of corruption scenarios we could talk about, but each issue will have to be evaluated, and the decision to restore or not would be decided based on the issue at hand.
    I totally agree.
    The only thing that must not happen is: open a call conference and talk about what a corruption is in the first place, why it happened, how it could happen at all ... I spend hours of precious lifetime in such nonsense call confs, only to see that there is no plan for this on the customer side.
    > I would love to think that this is something we could do daily to a sandpit system, but with a 1.7TB production database, our backups take 6hrs, a restore would take about 10hrs, and the consistency check ... well a while.
    We have customers saving multi-TB databases in far less time - it is possible.
    > And what a luxury to be able to do this ... do you actually know of ANY sites that do this?
    Quick Backups? Yes, quite a few. Complete Backup, Restore, Consistency Check cycle? None.
    So why is that? I believe it's because there is no single button for it.
    It's not integrated into the CCMS and/or the database management software.
    It might also be (hopefully) that I never hear of these customers. See, as a DB Support Consultant I don't get in touch with "success stories". I see failures and bugs all day.
    To me the correct behaviour would be to actually stop the database once the last verified backup is too old. Just like everybody is used to it, when he hits a LOGFULL /ARCHIVER STUCK situation.
    Until then - I guess I will have a lot more data rescue to do...
    > Had a read  ...  being from New Zealand I could easily relate to the sheep =)
    > That's not what I meant.  Like I said, we are a 24x7x365 system.  We get a maximum of 2hrs downtime for maintenance a month.  Not that we need it these days, as the systems practically run themselves.  What I meant was that 7am to 7pm are our busiest peak hours, but we have dispatch personnel, warehouse operations, shift supervisors ..etc.. as well as a huge amount of batch running through the "night" (and day).  We try to maintain a good dialog response during the core hours, and then try to perform all the "other" stuff around these hours, including backups, opt stats, business batch, large BI extractions ..etc..
    > Are we busy all day and night ... yes ... very.
    Ah ok - got it!
    Especially in such situations I would not try to implement consistency checks on your prod. database.
    Basically, running a CHECK DATA there does not mean anything. Right after a table has finished the check, it can get corrupted although the check is still running on other tables. So you have no guaranteed consistent state in a running database - never, really.
    On the other hand, what you really want to know is not: "Are there any corruptions in the database?" but "If there would be any corruptions in the database, could I get my data back?".
    This later question can only be answered by checking the backups.
    > Noted and agreed.  Will do daily backups via MaxDB kernel, and a full verification each week.
    One more customer on the bright side
    > One last question.  If we "restored" from an EVA snapshot, and had the DB logs up to the current point-in-time, can you tell MaxDB just to roll forward using these logs even though a restore wasn't initiated via MaxDB?
    I don't see a reason why not - if you restore the data and log area and bring the DB to admin mode, then it uses the last successful savepoint for startup.
    If you then use recover_start to supply more logs, that should work.
    But as always this is something that needs to be checked on your system.
    That has been a really nice discussion - hope you don't get my comments as offending, they really aren't meant that way.
    KR Lars
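    A minimal sketch of that roll-forward, assuming the data and log areas have already been restored from the EVA snapshot and that a log backup medium called "LogMedium" exists (the names are placeholders; check the exact recover_start parameters for your MaxDB version):
    dbmcli -d <database_name> -u <dbm_user>,<password>
         db_admin                        (bring the restored instance into ADMIN mode)
         recover_start LogMedium LOG     (start replaying archived log backups from that medium)
         recover_replace LogMedium       (repeat for each further log piece if prompted)
         db_online                       (open the database once the logs are applied)
         exit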

  • File Share crawl - NetApp Servers - Permission Issues

    hello,
    There are many file share contents hosted on NetApp, which are being crawled by the SharePoint 2013 Search engine in our organization.
    The Search Crawl account is granted Read and Execute permissions on the Shared Drive. The crawler reads the content and the permissions correctly for the first time.
    But once the permissions are modified, such as adding new users or removing existing users, and after triggering a full crawl, the permission changes are not reflected in the SharePoint search results. However, the same scenario works fine on a local Windows shared drive.
    I have learnt that the crawl account should be a part of the "Manage auditing" group policy and permissions, but have also learnt that NetApp doesn't have such a thing. I hope I am not the only one with such an issue. Please suggest.
    Sreeharsha Alagani | MCTS Sharepoint 2010 |
    Linkedin | Blog

    Hi Sreeharsha,
    As I understand it, you are using NetApp for the file share, and you would like to know about the permissions needed to crawl files.
    Since the issue is related to a third-party product, there are not sufficient resources here. Please contact their support engineers. For your convenience:
    https://communities.netapp.com/welcome
    In addition, I found some articles might help:
    https://communities.netapp.com/community/netapp-blogs/msenviro/blog/2013/10/30/best-practices-for-sharepoint-2013-search-service-application-for-smsp
    https://kb.netapp.com/support/index?page=content&id=3013718&pmv=print&impressions=false
    Regards,
    Rebecca Tu
    TechNet Community Support

  • Mounting a VMware netapp backup to retrieve deleted files

    I've got a 2012 R2 server running a managed document suite (Laserfiche) on VMware vSphere Client 5.1, and some folders in the managed document software were deleted and need to be recovered.  Therefore I need to restore/mount a NetApp backup of this VM from 6 or 7 days ago to recover the data (while the current server is still running), and I'm wondering what would be the best approach to take?
    I've tried creating a new VM based upon the .vmdk file in the data store from the proper backup date with no joy, receiving an "Insufficient permission to access file" error.
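    One common approach (not something confirmed in this thread) is to clone the datastore volume from the relevant snapshot on the filer and mount the clone as a separate datastore, rather than pointing a new VM at files inside the live datastore. A rough sketch, assuming Data ONTAP 7-Mode, an NFS datastore, and hypothetical volume, snapshot, and host names:
    snap list vol_laserfiche                          (find the nightly snapshot from 6-7 days back)
    vol clone create lf_restore -s none -b vol_laserfiche nightly.6
    exportfs -p rw,root=esxi01 /vol/lf_restore        (export the clone to the ESXi host)
    You can then mount /vol/lf_restore as a new NFS datastore in vSphere, register the cloned VM (or just attach its VMDK to a helper VM), copy the deleted folders back, and destroy the clone afterwards.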

    Hello ProjectExtraSpa
    Unfortunately, once a file has been deleted from the unit it is gone. There is no "recycle bin" or undo-deletion option, even in the web interface.
    For issues such as this, keeping a valid backup in place would prevent human errors like this from causing data loss.
    If the data on the unit is the only copy of the data, then it is not considered a "backup"; even with RAID 1 running, it would be considered primary/active storage. RAID is not the same as a backup.
    The recommended backup practice is to have two local copies of the data and a remote/offsite backup, i.e. the original copy on your computer or primary storage, a local backup such as USB drives or even another NAS/computer, and then an offsite backup such as a remote NAS, cloud storage, a periodically updated USB drive, etc.
    If a good backup strategy is in place, it becomes difficult for data loss from accidents, natural disasters, or malicious intent to occur.

  • NETApp and Cisco 5000s

    I am in the design stages of a data center where we want to use Nexus 5000s, NetApp 3140s and Nexus 2232s for FCoE. From my understanding these devices interoperate like so:
    Servers will have converged NICs that use normal Cat 6 to connect to the 2232s
    The cabling between the 5Ks and the 2Ks will be FET (Fabric Extender Transceiver)
    The cabling between the 7Ks and the 5Ks is Twinax
    So my question would be: how does the NetApp interoperate with the 5Ks? What is the cabling? Someone asked me if I was going to use a storage access layer device with the NetApps. Is that necessary? Recommended?
    Thanks for any help or advice you can give,
    P.

    1) The NetApp target CNAs support attachment via SFP+ and fiber optics, or via Twinax cable.
    Note that this depends on which part you've got installed :
    X1139A-R6 - fiber (QLE8142)
    X1140A-R6 - copper (QLE8152)
    2) For interoperability and some configuration examples please refer to the following documents.
    NetApp Interoperability Matrix Tool (IMT)
    https://now.netapp.com/NOW/products/interoperability/
    Fibre Channel over Ethernet (FCoE) End-to-End Deployment Guide
    http://media.netapp.com/documents/TR-3800.pdf
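    To make the 5K side concrete, here is a minimal NX-OS sketch for attaching a NetApp CNA port directly to a Nexus 5000; the interface, VLAN/VSAN numbers, and description are hypothetical, and the FCoE VLAN has to be carried on the trunk and bound to a vfc interface:
    feature fcoe
    vlan 1002
      fcoe vsan 102
    interface Ethernet1/10
      description NetApp CNA e2a
      switchport mode trunk
      switchport trunk allowed vlan 1,1002
      spanning-tree port type edge trunk
    interface vfc110
      bind interface Ethernet1/10
      no shutdown
    vsan database
      vsan 102 interface vfc110
    Server-facing ports on the 2232 FEX follow the same vfc-binding pattern, and the TR-3800 guide linked above walks through the full end-to-end example.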

  • Netapp All-Flash FAS (AFF) – What Does This Mean?

    by NetApp A-Team Member Glenn Dekhayser, Practice Lead, Office of the CTO, Red8
    A bunch of my contemporaries have published excellent technical blogs on NetApp’s recent release of their All-Flash FAS systems and simultaneous massive reduction in the acquisition and support prices of those same systems. So there’s excellent new info on how great these platforms perform, and how their costs are now in line with (or better than) competitors who have been flash-focused for a while now. Assuming the performance is there (and based on performance numbers I’m seeing in real-world scenarios, it is), and those costs are well understood, does this development mean anything important to the storage or converged infrastructure market? You betcha.

    If there are problems with updating or with the permissions then easiest is to download the full version and trash the currently installed version to do a clean install of the new version.
    Download a new copy of the Firefox program and save the DMG file to the desktop
    * Firefox 6.0.x: http://www.mozilla.com/en-US/firefox/all.html
    * Trash the current Firefox application to do a clean (re-)install
    * Install the new version that you have downloaded
    Your profile data is stored elsewhere in the Firefox Profile Folder, so you won't lose your bookmarks and other personal data.
    * http://kb.mozillazine.org/Profile_folder_-_Firefox

  • Sun Cluster with Netapps - iSCSI quorum and network port

    I am proposing Sun cluster with Netapps 3020C.
    May I know
    1) OS is Solaris 9. The SUN OSP says that we need to obtain an iSCSI license from NetApp. Is this the iSCSI initiator software for Solaris 9 to talk to the NAS quorum? Or do I need to purchase a 3rd-party iSCSI initiator?
    2) We provide 2 network ports for the Netapps private NAS LAN. Is it a must to cater another dedicated network port for the iSCSI communication with the quorum?
    3) If we need to purchase a 3rd-party iSCSI initiator, where can we get this? I have checked QLogic and Cisco; they are both not suitable for my solution.
    Appreciate your help

    Hi,
    > 1) OS is Solaris 9. The SUN OSP says that we need to obtain an iSCSI license from NetApp. Is this the iSCSI initiator software for Solaris 9 to talk to the NAS quorum? Or do I need to purchase a 3rd-party iSCSI initiator?
    Have a look at http://docs.sun.com/app/docs/doc/817-7957/6mn8834r2?a=view
    I read the "Requirements When Configuring NAS Devices as Quorum Devices" section as saying this is the license for the iSCSI initiator software.
    So you need to enable iSCSI on the NetApp box and install a package from NetApp (NTAPclnas) on the cluster nodes.
    > 2) We provide 2 network ports for the NetApp private NAS LAN. Is it a must to cater another dedicated network port for the iSCSI communication with the quorum?
    Have a look at http://docs.sun.com/app/docs/doc/819-0580/6n30eahcc?a=view#ch4_quorum-9
    I don't read such a requirement there.
    > 3) If we need to purchase a 3rd-party iSCSI initiator, where can we get this? I have checked QLogic and Cisco; they are both not suitable for my solution.
    I don't think you need such a 3rd-party iSCSI initiator, unless this is stated in the above docs.
    Greets
    Thorsten
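    A minimal sketch of what that looks like, assuming a Data ONTAP 7-Mode filer and Sun Cluster 3.x nodes; the license code and package location are placeholders, and the quorum device itself is then added through scsetup/clsetup on the cluster:
    On the filer:
         license add <iscsi_license_code>
         iscsi start
    On each cluster node:
         pkgadd -d /path/to/netapp_pkg_dir NTAPclnas
    In other words, following the reply above: enable iSCSI on the filer and install the NetApp package on the nodes; no 3rd-party initiator should be required.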

Maybe you are looking for

  • Hard drive and Time Machine "Invalid Node" Error

    All, I have a late 2006 iMac, which I know must be at the end of its life now, but at the weekend my main internal HD developed an error, and it wouldn't boot, even from the recovery partition. I managed to find my old Snow Leapard disks and ran Disk

  • How do I have different music on my ipad than my iphone

    I just got an IPad for my girls for Christmas.  I currently have an iphone4 with a bunch of my music on it.  I do not want this music to sync to the girls' Ipad (think of how Rage Against the Machine lyrics will sound to an 8 year old!!!), but it did

  • Select SUM( ) in inner join - error in code

    Hi All, The following code is using ABAP OO and I get the following error E:The addition "FOR ALL ENTRIES" excludes all aggregate functions with the exception of "COUNT( * )", as the single element of the SELECT clause. Here is the Code: select a/bic

  • Heap space out of memory in weblogic

    Hi, I am using weblogic server 10.3.2,Actually i am facing heap space error because lot of data is there is the database approx 50lacs Data. And my system configuration is 8GB RAM and 64bit machine. but after 5 minutes heap space error... Note:I have

  • Can I change the default settings for InfoView?

    Hi, I have a Crystal Reports Server 2008 running. The users access their reports via the InfoView web application. In this application each user can configure some settings, like whether to use the Acrobat Reader or an ActiveX control for printing. D