Solaris 10 to NetApp FAS 2020

Hello,
Was wondering what the hive mind thinks about connecting a Sun T5120 server via two Fibre Channel (QLogic) cards to a NetApp FAS 2020.
We hope to use the multipathing native to the Solaris 10 OS...
When I issue the command below, I don't see NetApp as a supported device... is this a problem? That is my big first question. (The QLogic Fibre Channel HBA PCI cards aren't installed yet... maybe that's the problem?)
mpathadm show mpath-support libmpscsi_vhci.so
(It lists a bunch of devices, but no NetApp FAS 2020...)
Thanks!!!!
- Eledor StarFire
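
A sketch of the usual Solaris 10 MPxIO steps, assuming the HBAs are installed and the NetApp LUNs are presented as symmetric (active/active) devices; check the exact settings against NetApp's Host Utilities documentation for your ONTAP release:

    # enable MPxIO on all fp(7D)-attached ports (prompts for the required reboot)
    stmsboot -e

    # after the reboot, multipathed LUNs should show up as single scsi_vhci devices
    mpathadm list lu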

I know, I'm already doing that. But, that's the rawest of workarounds. Twenty years ago, we called that the "Henny Youngman solution": "Doc, it hurts when I move my arm like this!" "Well, don't move your arm like that!"
What's the solution? Is there a patch for the NetApp, for Solaris 10, or are there settings that fix it, if any? We sure could use NFSv4 support for things like platform-independent ACL definitions. ...

Similar Messages

  • Big delays reading or saving files from NAS NetApp FAS 2020

    I support a newsletter magazine. They own five 20" iMacs (mid 2007 and early 2008) running Mac OS X 10.6 and Adobe Creative Suite 4. From time to time, they experience large delays reading or writing files to the NAS (NetApp FAS 2020). The network is 1Gb. They don't have this problem on PCs.
    Any idea how to stop these delays?

    Are you absolutely sure the network is a bona fide gigabit one all the way?
    We had problems with files taking forever to load from a network share. It turned out that the IP phone feeding our Mac Pro through its built-in switch was the culprit: its switch was a mere 100Mbit-capable one.
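    A quick per-machine check, assuming the wired interface is en0, is to look at the negotiated media from a terminal:

        # should report something like "media: autoselect (1000baseT <full-duplex>)"
        ifconfig en0 | grep media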

  • VAAI UNMAP delete status unsupported: NetApp FAS array, Data ONTAP 8.2.1

    Hi all. A quick rundown of our environment: VMware ESXi 5.0.0 Update 3 build 1918656 with vCenter 5.0.0 U3; a NetApp FAS8020 running Data ONTAP 8.2.1 7-Mode with thin-provisioned FlexVols; and the VSC 4.2.2 plugin. For the purposes of testing this, I created a thin-provisioned 100GB LUN holding a 55GB thin-provisioned VM.
    I am looking to use UNMAP to reclaim space from our thin-provisioned volumes on the NetApp. We mostly use thin-provisioned VM disks, though some VMs are thick provisioned as well. When I say the volume is thin provisioned, I mean that when you set up a new volume on the NetApp array, you tick the option to thin provision it.
    However, I have run into a few problems and am struggling to understand why I am unable to reclaim space on the test volume. As a test, I Storage vMotioned a VM that was no longer needed onto the test volume, which consumed space on the LUN, and then Storage vMotioned it back off to another volume. That freed the space on the VMware datastore side, but not in the volume capacity on the storage. Having read up on VAAI in detail, I am also unsure why the thin provisioning status shows as "unknown"; I can confirm the volume is thin provisioned and NOT thick provisioned.
    The VAAI status shows delete as unsupported. So I attempted to use vmkfstools to reclaim the space:
    cd /vmfs/volumes/55965500-d4fc0353-ce8c-c4346bbae697
    When I ran the vmkfstools -y percentage_of_deleted_blocks_to_reclaim command, it looked like it worked; I tried it a few times with different percentages. The volume status on the NetApp side shows the space hasn't been recovered: still 39% used, with 61.28GB consumed, even though there are no VMs on this volume. On the VMware side, the space shows as freed, as expected.
    Does anyone know why I might be seeing this behaviour? I know there were issues on earlier 5.0 releases and reclamation had to be run manually, but I am on a much later release of 5.0, and I thought the vmkfstools utility allowed it to be run manually anyway? Any help would be most appreciated.
    Thanks

    I have found this article, which relates to what I am seeing: VMware KB: Thin Provisioning Block Space Reclamation (VAAI UNMAP) does not work. When I check the VMware compatibility matrix, it shows VAAI thin provisioning space reclamation as not supported. I am using a FAS8020 array; given its age I would have thought this SAN would support the feature, or is it an issue elsewhere in the environment? From what I can tell, upgrading to a newer version of ONTAP or VMware ESXi doesn't make any difference. Is thin provisioning space reclamation only a feature of very high-end SANs? Is there anything I can use to reclaim space? Thanks
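
    For what it's worth, a couple of commands may help confirm what ESXi thinks of the device (the naa identifier below is a placeholder for your LUN's device ID):

        # show which VAAI primitives ESXi believes the device supports
        esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx

        # manual space reclamation on ESXi 5.0, run from inside the datastore directory
        cd /vmfs/volumes/55965500-d4fc0353-ce8c-c4346bbae697
        vmkfstools -y 60

    If the Delete Status reports unsupported there, the array isn't advertising the UNMAP primitive for that LUN, and vmkfstools -y will likely have nothing to reclaim against.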

  • Solaris 8 SFS 4.4.13 mpxio question

    I have a 25K domain running Solaris 8 02/04 with two qlc 2340 HBAs attached to two Cisco 9509s, getting a single LUN from a NetApp FAS.
    I installed the OS, added the latest recommended patch cluster, then installed SFS 4.4.13. The NetApp LUN is presented to each of the HBA paths, with the hope of using MPxIO across all 4 HBA connections. I can see all the devices at this point, but I cannot get them under Traffic Manager control; they just have the typical dev paths and not the vhci paths you would expect.
    I ran cfgadm -c configure on them, and luxadm comes back looking good. I am concerned that I might not have the correct entries in my /kernel/drv/scsi_vhci.conf. I am not booting from this LUN; the system has a separate locally attached S3100.
    So, in a nutshell, I have 4 paths (8 with the dual paths on the NetApp's pri/failover) to a single LUN, and want to bind them into one virtual path. Has anyone run a similar config, and does anyone have an example of the file entries specific to a "NETAPP LUN" device (section of scsi_vhci.conf, etc.) as reported by luxadm inquiry? See the sketch below.
    tia
    Edited by: mpulliam on Dec 16, 2009 2:13 PM
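
    A sketch of the /kernel/drv/scsi_vhci.conf entries typically used for NetApp LUNs under MPxIO; treat the option value as an assumption to verify against the NetApp host attach kit for your ONTAP release:

        mpxio-disable="no";
        device-type-scsi-options-list =
            "NETAPP  LUN", "symmetric-option";    <--- vendor ID padded to 8 characters
        symmetric-option = 0x1000000;

    A reconfiguration boot (reboot -- -r) after editing lets the paths re-enumerate under scsi_vhci.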

    Update:
    I shut down the domain, pulled the boot disk, and replaced it with a clean drive. I built the system with Solaris 10 05/08, then patched with the latest recommended patch cluster. My HBA is a QLogic 2342 connecting to Cisco 9509 switches connected to a NetApp FAS box (not 100% sure of the model atm, but very high end).
    I configured the same LUN I had on the Solaris 8 box, then mounted it up. All my data files are intact, so I began some testing to see if the performance was any different.
    I ran several timed sequential 10GB writes and hit a max throughput of about 170MB/s on the LUN. This is rather sad, really, as it's not even hitting 2Gb FC speeds. I really punished the system with several parallel 10GB writes, 10GB reads, and 4k writes and reads. I never achieved more than about 15k IOPS to the LUN, and never exceeded a peak of 190MB/s. Under the heavy load the device queue really stacked up. The filesystem on the LUN is UFS.
    We're going to try some different LUN setups on the NetApp to increase spindle count, but I'm really not sure it will help in any way.
    Does anyone have any experience with a similar setup? Perhaps changing settings in the QLogic driver conf, frame size for instance? Any ideas for pinning down the performance culprit (server, switch, storage)?
    tia
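
    Two quick checks that may help isolate it, assuming Solaris 10's fcinfo is available:

        # confirm each HBA port actually negotiated 2Gb
        fcinfo hba-port | grep -i speed

        # watch per-device throughput, service times, and queue depth during a test run
        iostat -xnz 5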

  • Solaris 8 ignores /etc/dt/config/C/Xresources

    I'm trying to load this resource for all users:
    XTerm*scrollBar: true
    I've put it into:
    /etc/dt/config/C/Xresources
    /etc/dt/config/Xresources
    /usr/dt/config/C/Xresources
    /usr/dt/config/Xresources
    $HOME/.Xresources
    but it doesn't work. However, loading it manually with xrdb /path/Xresources and then starting xterm again works fine (for the current session).
    Solaris is freshly installed.
    How do I solve this?
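
    One thing that may be worth trying: if I remember the CDE startup sequence right, the Xresources file in those directories is read by dtlogin for the login screen itself, while session-wide defaults are merged from sys.resources. So the resource may belong in a file like this (path assumed; create it if it doesn't exist):

        ! /etc/dt/config/C/sys.resources
        XTerm*scrollBar: true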


  • VPCs for Dual NetApp Controllers

    OK, here is our setup:
    1 3240 NetApp with Dual Controllers (Gen 1)
    2 5548s
    2 2232s
    Servers with Dual Port Emulex CNAs (Gen 2)
    1 4507 to legacy network
    I am trying to figure out how to be as redundant as possible, so that if we lose a 2K or a 5K the servers can still get to the SAN.
    Questions:
    How are the port channels set up? How are the vPCs set up? LACP is enabled on the NetApp. Can the 5Ks present a single vPC to each controller?
    Thanks for any advice or help you can provide,
    P.

    Run vPC on the two N5K switches.
    From the NetApp FAS, connect one Ethernet link to the first 5K and the second Ethernet link to the second 5K.
    Configure a port-channel on each 5K with the member interface going down to the NetApp FAS.
    Associate each port-channel with a vPC ID. The vPC ID must be the same on both switches.
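
    A minimal sketch of one side (N5K-A; mirror it on N5K-B with the same vPC ID, and note the interface and channel numbers are illustrative):

        interface port-channel 11
          description NetApp FAS controller A
          switchport mode trunk                 <--- carry the NFS/iSCSI VLANs as needed
          vpc 11
        interface Ethernet1/1
          description link to controller A e0a
          channel-group 11 mode active          <--- LACP, matching a dynamic multimode ifgrp on the NetApp

    Repeat with a second port-channel/vPC pair for controller B.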

  • Making Effective Use of the Hybrid Cloud: Real-World Examples

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, and it was clear that NetApp's approach to hybrid cloud and Data Fabric resonated with the crowd. NetApp solutions such as NetApp Private Storage for Cloud are solving real customer problems.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that allows you to move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    Check out the following blogs for more perspectives:
    Microsoft Ignite Sparks More Innovation from NetApp
    ASR Now Supports NetApp Private Storage for Microsoft Azure
    Four Ways Disaster Recovery is Simplified with Storage Management Standards
    Introducing OnCommand Shift
    SHIFT VMs between Hypervisors
    Infront Consulting + NetApp = Success
    Richard Treadway
    Senior Director of Cloud Marketing, NetApp
    Tom Shields
    Senior Manager, Cloud Service Provider Solution Marketing, NetApp
    Enterprises are increasingly turning to cloud to drive agility and closely align IT resources to business needs. New or short-term projects and unexpected spikes in demand can be satisfied quickly and elastically with cloud resources, spurring more creativity and productivity while reducing the waste associated with over- or under-provisioning.
    Figure 1) Cloud lets you closely align resources to demand.
    Source: NetApp, 2015
    While the benefits are attractive for many workloads, customer input suggests that even more can be achieved by moving beyond cloud silos and better managing data across cloud and on-premises infrastructure, with the ability to move data between clouds as needs and prices change. Hybrid cloud models are emerging where data can flow fluidly to the right location at the right time to optimize business outcomes while providing enhanced control and stewardship.
    These models fall into two general categories based on data location. In the first, data moves as needed between on-premises data centers and the cloud. In the second, data is located strategically near, but not in, the cloud.
    Let's look at what some customers are doing with hybrid cloud in the real world, their goals, and the outcomes.
    Data in the Cloud
    At NetApp, we see a variety of hybrid cloud deployments sharing data between on-premises data centers and the cloud, providing greater control and flexibility. These deployments utilize both cloud service providers (CSPs) and hyperscale public clouds such as Amazon Web Services (AWS).
    Use Case 1: BlackLine Partners with Verizon for Software-as-a-Service Colocation and Integrated Disaster Recovery in the Cloud
    For financial services company BlackLine, availability, security, and compliance with financial standards are paramount. But with the company growing at 50% per year, and periodic throughput and capacity bursts of up to 20 times baseline, the company knew it couldn't sustain its business model with on-premises IT alone.
    Stringent requirements often lead to innovation. BlackLine deployed its private cloud infrastructure at a Verizon colocation facility. The Verizon location gives them a data center that is purpose-built for security and compliance. It enables the company to retain full control over sensitive data while delivering the network speed and reliability it needs. The colocation facility gives BlackLine access to Verizon cloud services with maximum bandwidth and minimum latency. The company currently uses Verizon Cloud for disaster recovery and backup. Verizon cloud services are built on NetApp® technology, so they work seamlessly with BlackLine's existing NetApp storage.
    To learn more about BlackLine's hybrid cloud deployment, read the executive summary and technical case study, or watch this customer video.
    Use Case 2: Private, Nonprofit University Eliminates Tape with Cloud Integrated Storage
    A private university was just beginning its cloud initiative and wanted to eliminate tape—and offsite tape storage. The university had been using Data Domain as a backup target in its environment, but capacity and expense had become a significant issue, and it didn't provide a backup-to-cloud option.
    The director of Backup turned to a NetApp SteelStore cloud-integrated storage appliance to address the university's needs. A proof of concept showed that SteelStore™ was perfect. The on-site appliance has built-in disk capacity to store the most recent backups so that the majority of restores still happen locally. Data is also replicated to AWS, providing cheap and deep storage for long-term retention. SteelStore features deduplication, compression, and encryption, so it efficiently uses both storage capacity (both in the appliance and in the cloud) and network bandwidth. Encryption keys are managed on-premises, ensuring that data in the cloud is secure.
    The university is already adding a second SteelStore appliance to support another location, and—recognizing which way the wind is blowing—the director of Backup has become the director of Backup and Cloud.
    Use Case 3: Consumer Finance Company Chooses Cloud ONTAP to Move Data Back On-Premises
    A leading provider of online payment services needed a way to move data generated by customer applications running in AWS to its on-premises data warehouse. NetApp Cloud ONTAP® running in AWS proved to be the least expensive way to accomplish this.
    Cloud ONTAP provides the full suite of NetApp enterprise data management tools for use with Amazon Elastic Block Storage, including storage efficiency, replication, and integrated data protection. Cloud ONTAP makes it simple to efficiently replicate the data from AWS to NetApp FAS storage in the company's own data centers. The company can now use existing extract, transform and load (ETL) tools for its data warehouse and run analytics on data generated in AWS.
    Regular replication not only facilitates analytics, it also ensures that a copy of important data is stored on-premises, protecting data from possible cloud outages. Read the success story to learn more.
    Data Near the Cloud
    For many organizations, deploying data near the hyperscale public cloud is a great choice because they can retain physical control of their data while taking advantage of elastic cloud compute resources on an as-needed basis. This hybrid cloud architecture can deliver better IOPS performance than native public cloud storage services, enterprise-class data management, and flexible access to multiple public cloud providers without moving data. Read the recent white paper from the Enterprise Strategy Group, “NetApp Multi-cloud Private Storage: Take Charge of Your Cloud Data,” to learn more about this approach.
    Use Case 1: Municipality Opts for Hybrid Cloud with NetApp Private Storage for AWS
    The IT budgets of many local governments are stretched tight, making it difficult to keep up with the growing expectations of citizens. One small municipality found itself in this exact situation, with aging infrastructure and a data center that not only was nearing capacity, but was also located in a flood plain.
    Rather than continue to invest in its own data center infrastructure, the municipality chose a hybrid cloud using NetApp Private Storage (NPS) for AWS. Because NPS stores personal, identifiable information and data that's subject to strict privacy laws, the municipality needed to retain control of its data. NPS does just that, while opening the door to better citizen services, improving availability and data protection, and saving $250,000 in taxpayer dollars. Read the success story to find out more.
    Use Case 2: IT Consulting Firm Expands Business Model with NetApp Private Storage for Azure
    A Japanese IT consulting firm specializing in SAP recognized the hybrid cloud as a way to expand its service offerings and grow revenue. By choosing NetApp Private Storage for Microsoft Azure, the firm can now offer a cloud service with greater flexibility and control over data versus services that store data in the cloud.
    The new service is being rolled out first to support the development work of the firm's internal systems integration engineering teams, and will later provide SAP development and testing, and disaster recovery services for mid-market customers in financial services, retail, and pharmaceutical industries.
    Use Case 3: Financial Services Leader Partners with NetApp for Major Cloud Initiative
    In the heavily regulated financial services industry, the journey to cloud must be orchestrated to address security, data privacy, and compliance. A leading Australian company recognized that cloud would enable new business opportunities and convert capital expenditures to monthly operating costs. However, with nine million customers, the company must know exactly where its data is stored. Using native cloud storage is not an option for certain data, and regulations require that the company maintain a tertiary copy of data and retain the ability to restore data under any circumstances. The company also needed to vacate one of its disaster-recovery data centers by the end of 2014.
    To address these requirements, the company opted for NetApp Private Storage for Cloud. The firm placed NetApp storage systems in two separate locations: an Equinix cloud access facility and a Global Switch colocation facility both located in Sydney. This satisfies the requirement for three copies of critical data and allows them to take advantage of AWS EC2 compute instances as needed, with the option to use Microsoft Azure or IBM SoftLayer as an alternative to AWS without migrating data. For performance, the company extended its corporate network to the two facilities.
    The firm vacated the data center on schedule, a multimillion-dollar cost avoidance. Cloud services are being rolled out in three phases. In the first phase, NPS will provide disaster recovery for the company's 12,000 virtual desktops. In phase two, NPS will provide disaster recovery for enterprise-wide applications. In the final phase, the company will move all enterprise applications to NPS and AWS. NPS gives the company a proven methodology for moving production workloads to the cloud, enabling it to offer new services faster. Because the on-premises storage is the same as the cloud storage, making application architecture changes will also be faster and easier than it would be with other options. Read the success story to learn more.
    NetApp on NetApp: nCloud
    When NetApp IT needed to provide cloud services to its internal customers, the team naturally turned to NetApp hybrid cloud solutions, with a Data Fabric joining the pieces. The result is nCloud, a self-service portal that gives NetApp employees fast access to hybrid cloud resources. nCloud is architected using NetApp Private Storage for AWS, FlexPod®, clustered Data ONTAP and other NetApp technologies. NetApp IT has documented details of its efforts to help other companies on the path to hybrid cloud. Check out the following links to learn more:
    Hybrid Cloud: Changing How We Deliver IT Services [blog and video]
    NetApp IT Approach to NetApp Private Storage and Amazon Web Services in Enterprise IT Environment [white paper]
    NetApp Reaches New Heights with Cloud [infographic]
    Cloud Decision Framework [slideshare]
    Hybrid Cloud Decision Framework [infographic]
    See other NetApp on NetApp resources.
    Data Fabric: NetApp Services for Hybrid Cloud
    As the examples in this article demonstrate, NetApp is developing solutions to help organizations of all sizes move beyond cloud silos and unlock the power of hybrid cloud. A Data Fabric enabled by NetApp helps you more easily move and manage data in and near the cloud; it's the common thread that makes the use cases in this article possible. Read Realize the Full Potential of Cloud with the Data Fabric to learn more about the Data Fabric and the NetApp technologies that make it possible.
    Richard Treadway is responsible for NetApp Hybrid Cloud solutions including SteelStore, Cloud ONTAP, NetApp Private Storage, StorageGRID Webscale, and OnCommand Insight. He has held executive roles in marketing and engineering at KnowNow, AvantGo, and BEA Systems, where he led efforts in developing the BEA WebLogic Portal.
    Tom Shields leads the Cloud Service Provider Solution Marketing group at NetApp, working with alliance partners and open source communities to design integrated solution stacks for CSPs. Tom designed and launched the marketing elements of the storage industry's first Cloud Service Provider Partner Program—growing it to 275 partners with a portfolio of more than 400 NetApp-based services.

    Dave:
    "David Scarani" <[email protected]> wrote in message
    news:3ecfc046$[email protected]..
    >
    > I was looking for some real-world "best practices" for deploying J2EE applications
    > into a production WebLogic environment.
    > We are new at deploying applications to J2EE application servers and are currently
    > debating 2 methods.
    > 1) Store all configuration (application as well as domain configuration) in properties
    > files and use Ant to rebuild the domain every time the application is deployed.
    > 2) Have a production domain built one time, configured as required and always
    > up and available, then use Ant to deploy only the J2EE application into the existing,
    > running production domain.
    > I would be interested in hearing how people are doing this in their production
    > environments and any pros and cons of one way over the other.
    > Thanks.
    > Dave Scarani

    I am just a WLS engineer, not a customer, so my opinions have in some regards little relative weight. However, I think you'll get more mileage out of option 2, plus the fact that once you have created your config.xml you can check it into source control and version it. I would imagine that application changes are more frequent than server/domain configuration, so it seems a little heavyweight to regenerate the entire configuration every time an application is deployed/redeployed. Either way, you should check out the wlconfig Ant task.
    Cheers
    mbg
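
    For the deploy-only path in option 2, the command-line deployer that ships with WebLogic can be driven from a script or from Ant's exec task; a sketch, where the host, credentials, and application names are illustrative:

        java weblogic.Deployer -adminurl t3://prodadmin:7001 -user weblogic -password welcome1 \
            -deploy -name myapp -source myapp.ear -targets myserver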

  • GeoRaster in the Enterprise - tips?

    Next week I will start a pilot project loading mid- to high-resolution (0.5m to 0.1m) orthophotos for about 200 sites of around 13,000 hectares (50 sq mi) each. The sites are for the most part not contiguous, but the data is global. The source format is mostly MrSID, although for about 20% of the sites TIFF or PNG formats are available, with a few ECWs as well. I plan to unproject and store the photo data in 8307 (lat/lon) format, as that is what our application uses as its native format, and to allow global coverage.
    The data will be used in three ways:
    First, it will be retrieved in defined tile sizes and scales for a home-grown Java rich-client application so that tiles can be saved to the client's disk to minimize repetitive bandwidth requirements. For areas not covered by these orthos, MS VE tiles will be used.
    Second, it will be exposed for others to use as a WMS service.
    Third, as a source for "export" into other formats on an on-demand process.
    We are using 64-bit Oracle 10gR2 on Solaris x86 with NetApp filers on the db end, and 64-bit Windows servers on the app end, all on a GBE network. I have both FME 2007 and Manifold v8 available for data conversion functions. For data serving, GeoServer and FME (for option three) are planned as we already use GeoServer successfully for WFS duties.
    I am looking for suggestions / tips from anyone on how best to load and store the imagery, and then optimize the retrieval of it. I've read the GeoRaster user guide (and starting reading the 11g one) and the book.
    I am particularly interested in what to do when you have overlapping images (two sites that overlap), and also in how best to load a site when the site is not one image but made up of tiles.
    For software: say, if MapViewer has proven faster for WMS than GeoServer for you, I'm willing to listen.
    Thanks in advance,
    Bryan

    1. Yes, (256,256,1) or (256,256,3) should work fine. Since your focus is on display and retrieval, it's best to make sure that (1) the images are stored using the same block size, i.e., (256,256,1) or (256,256,3), and (2) when you retrieve the tiles, your query window covers exactly the area of the tiles (i.e., avoiding internal mosaicking and cropping).
    2. Yes. In a regular tablespace, if you have a LOT of image data and concurrent multi-users, the more data files the better; you can make tens of them. When dealing with geoimages, ten data files is probably the minimum. But you might also want to consider an Oracle BIGFILE tablespace, in which only one data file (which can grow to terabytes) is allowed. We didn't do many tests on it, but the performance seems very good.
    3. I think it all depends. It seems your major purpose is to quickly retrieve and display the images and to pan and zoom smoothly across many images and around the globe. If so, I agree you'd better do some preprocessing so that the data has the same resolution and pyramids; scaleCopy can be considered for this.
    Hope this helps a bit.
    jeffrey

  • Ask the Expert: Single-Site and Multisite FlexPod Infrastructure

    With Haseeb Niazi and Chris O'Brien 
    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Single-Site and Multisite FlexPod Infrastructure with experts Haseeb Niazi and Chris O'Brien.
    This is a continuation of the live webcast.
    FlexPod is a predesigned and prevalidated base data center configuration built on Cisco Unified Computing System, Cisco Nexus data center switches, NetApp FAS storage components, and a number of software infrastructure options supporting a range of IT initiatives. FlexPod is the result of deep technology collaboration between Cisco and NetApp, leading to the creation of an integrated, tested, and validated data center platform that has been thoroughly documented in a best practices design guide. In many cases, the availability of Cisco Validated Design guides has reduced the time to deployment of mission-critical applications by 30 percent.
    The FlexPod portfolio includes a number of validated design options that can be deployed in a single site to support both physical and virtual workloads or across metro sites for supporting high availability and disaster avoidance. This session covers various design options available to customers and partners, including the latest MetroCluster FlexPod design to support a VMware Metro Storage Cluster (vMSC) configuration.
    Haseeb Niazi is a technical marketing engineer in the Data Center Group specializing in security and data center technologies. His areas of expertise also include VPN and security, the Cisco Nexus product line, and FlexPod. Prior to joining the Data Center Group, he worked as a technical leader in the Solution Development Unit and as a solutions architect in Advanced Services. Haseeb holds a master of science degree in computer engineering from the University of Southern California. He’s CCIE certified (number 7848) and has 14 years of industry experience.   
    Chris O'Brien is a technical marketing manager with Cisco’s Computing Systems Product Group.  He is currently focused on developing infrastructure best practices and solutions that are designed, tested, and documented to facilitate and improve customer deployments. Previously, O'Brien was an application developer and has worked in the IT industry for more than 20 years.
    Remember to use the rating system to let Haseeb and Chris know if you have received an adequate response. 
    Because of the volume expected during this event, Haseeb and Chris might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, subcommunity Unified Computing shortly after the event. This event lasts through September 27, 2013. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.
    Webcast related links:
    Single-Site and Multisite FlexPod Infrastructure - Slides from live webcast
    Single-Site and Multisite FlexPod Infrastructure: FAQ from live webcast
    Single-Site and Multisite FlexPod Infrastructure - Video from live webcast

    I would suggest you read this white paper, which details the pros and cons of direct-connect storage:
    http://www.cisco.com/en/US/partner/prod/collateral/ps10265/ps10276/whitepaper_c11-702584.html   This paper captures all the major design points for the Ethernet and FC protocols.
    I would only add that in FlexPod we are trying to create a highly available and "flexible" solution; Nexus switching helps us deliver on both with vPC and unified ports.
    NPV equates to end-host mode, which allows the system to present all of the servers as N ports to the external fabric. In this mode, the vHBAs are pinned to the egress interfaces of the fabric interconnects. This pinning removes the potential for loops in the SAN fabric. Host-based multipathing of the vHBAs accounts for potential uplink failures. NPV (end-host) mode simplifies the attachment of UCS to the SAN fabric, and that is why it is the default.
    As for your last question, I will have to put my Product Manager hat on, so bear with me. First off, there is no drawback to enabling the NPIV feature (none that I am aware of); the Nexus 5000 platform simply offers you a choice to design and support multiple FC initiators (N ports) per F port via NPIV. This allows for the integration of the FI end-host mode described above. I imagine that, the Nexus 5000 being a unified access-layer switch, the Nexus team enabled standard Fibre Channel switching capability and features first. The implementation of NPIV is a customer choice based on specific access-layer requirements.
    /Chris
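
    For reference, enabling NPIV on the upstream Nexus 5000 is a single feature toggle (non-disruptive as far as I'm aware):

        feature npiv
        show feature | include npiv    <--- verify it reports enabled

    NPV (end-host) mode on the UCS fabric interconnect is the default and needs no change for this.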

  • How to speed up Network Scans?

    Howdy. We are just starting to use the Spiceworks Help Desk, so I figured I might as well take a look at the Network Scanning functionality as well. I went in, set up my various IP ranges, and kicked off a scan. The scan seems to be working, but it looks like it's going to take forever, so I'm wondering the best way to speed things up. Spiceworks is installed on a VM running Server 2012 R2 (2 cores, dynamic memory), and it's running on clustered NetApp FAS drives. We have 200-250 various workstations (laptops, Surfaces, etc.)
    We have another couple hundred servers (mostly VMs)
    We also have 2 people who work the Help Desk tickets all the time and another 4-6 that work them if the first level people need help
    I see there are a lot of scan types that can be scheduled, but I don't know which ones are or aren't resource hungry...

    Hi  John,
    Please take a look at the reply from Wyck in the following thread.
    How to speed up for loop?
    His answer is very comprehensive.   Thanks.
    Best regards,
    kristin

  • SME 7.1 not working with Exchange 2013 SP1 CU6

    Hello, I tried upgrading to SME 7.1 on our Exchange 2013 SP1 CU6 server, but when trying to configure the Exchange server in the SME management console using the configuration wizard, it gave an error before the LUN mapping page: "Failed to get Exchange information for server" and "The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state." I upgraded SnapDrive to 7.1, verified all credentials according to the manuals, re-installed and rebooted multiple times (registry and filesystem cleaned, even tried renaming the existing SnapInfo directory, ...), and stopped all other services on the server, but nothing changed. Every time I run the configuration wizard, the debug log shows no errors, lists all the Exchange information, and ends with the following lines:
    [11/18 12:51:58 155] Exit CSmeCDOExm::GetExchangeStructureInternal 0x0
    [11/18 12:51:58 155] Exit CNtapSnapMgr::GetExchangeStructure: 0x0
    [11/18 12:51:58 233] Enter CSmeServerBase::~CSmeServerBase ...
    [11/18 12:51:58 248] m_ptrSnapShot released
    [11/18 12:51:58 248] m_pISnapShot4 released
    [11/18 12:51:58 248] m_pISnapShot5 released
    [11/18 12:51:58 248] m_pISnapShot6 released
    [11/18 12:51:58 248] m_pISnapShot7 released
    [11/18 12:51:58 248] m_pISnapShot8 released
    [11/18 12:51:58 248] m_pISnapShot9 released
    I ended up uninstalling SME 7.1 and re-installing SME 7.0, which works.
    Exchange server: ESXi 5.5 VM, Windows Server 2012 R2 (with current patches), Exchange 2013 SP1 CU6, no DAG, single-server installation (in co-existence with Exchange 2010), 4 iSCSI LUNs created with SnapDrive (1x log, 3x store), system drive and Exchange installation drive on VMDK disks (via ESXi -> NFS store on the same NetApp).
    NetApp: FAS2240-2 with Data ONTAP 8.1.4P1 7-Mode (with the complete license bundle).
    Anyone else got this problem or, even better, a solution?

    We could not reproduce the error with the upgrade workflow. Looking at the error and the relevant function, we could force the problem under the following conditions:
    - SME is configured or run at the DAG level
    - one of the following services is stopped or not responding on one of the nodes of the DAG: SnapDrive Service, SnapManager Service, SnapDrive Storage Management Service, SnapDrive Management Service
    It would be good if we could get the output of the following PowerShell cmdlet before and after the upgrade:
    Get-Service -ComputerName Node1, Node2, Node3 -Name Snap*, SDMGMT*, SWSvc
    Best regards,
    Dan

  • Trust state in Nexus 7000

    Hi. According to the docs, by default the N7K trusts DSCP and CoS and preserves the values at ingress. A packet traverses an N5K which is trunked to the N7K; the packet contains both a DSCP and a CoS value. In such a scenario:
    (1) Will the N7K just use the CoS value in the packet to perform egress queuing? Or will it use the DSCP value, check it against the default DSCP-CoS mapping table, and use that CoS value to perform egress queuing?
    The reason I ask is that if I have a scenario where the packet carries CoS=4 and DSCP=24, the egress queuing (based on CoS) will differ. If the N7K simply uses CoS=4, it will be mapped to Queue4, for example. If the N7K uses DSCP=24, the DSCP-CoS mapping maps it to CoS=3, and egress queuing will map it to Queue3.
    Any idea?
    Eng Wee

    Hi Andy,
    I'm struggling with QoS on an N7K (mixed F1 and M1 cards) and stumbled across your reply.
    I assume you meant CoS 0,1,2,3,4,5,6,7 in the right column.
    I need to classify, mark, and queue IP storage traffic (NFS and iSCSI) from a NetApp FAS, which cannot mark traffic itself (I've been told by NetApp).
    Therefore I need to mark ("remark") the traffic to AF21/CoS 2 on the 802.1Q trunk (actually running as a vPC) from/towards the NetApp, which is connected to an F132XP-15 port.
    According to the manual I cannot do CoS marking at ingress, only DSCP.
    My config:
    ip access-list NetApp-storage
      10 permit ip 10.xxx.yyy.0/24 any    <-- used for NFS
      20 permit ip 10.xxx.zzz.0/24 any    <-- used for iSCSI
    class-map type qos match-all IPstorage-IN
      match access-group name NetApp-storage
    policy-map type qos NetApp-IN
      class IPstorage-IN
        set dscp 18
      class class-default
        set dscp 0
    int po xxx                                     <--- VPC towards NetApp FAS
      service-policy type qos input NetApp-IN
    So based on your comments, the "storage" traffic will exit the switch with CoS=0, correct?
    In order to set the CoS properly on egress, I would additionally need to configure:
    policy-map type qos NetApp-OUT
      class IPstorage-IN
        set cos 2
      class class-default
        set Cos 0
    int eth xx/yy or port-channel zzz              <--- ports towards "storage users"
      service-policy type qos output NetApp-OUT
    on all ports ???
    Additional queuing questions:
    Will the egress queuing be done correctly by only setting the DSCP at ingress?
    My config:
    qos copy policy type queuing default-4q-8e-out-policy prefix QQ_
    policy-map type queuing QQ_4q-8e-out
      class type queuing 1p3q1t-8e-out-pq1
        priority level 1
      class type queuing 1p3q1t-8e-out-q2
        bandwidth remaining percent 1
      class type queuing 1p3q1t-8e-out-q3    <--- COS 2 should go here
        bandwidth remaining percent 49
      class type queuing 1p3q1t-8e-out-q-default
        bandwidth remaining percent 50
    int eth xx/yy or port-channel zzz              <--- ports towards "storage users"
      service-policy type queuing output QQ_4q-8e-out
      service-policy type qos output NetApp-OUT
    Is this correct?
    Best Regards
    Finn Poulsen

  • Split Brain Thinking

    We are designing a RAC environment for our ERP system and have run into an interesting problem. First, I'll give you the design, then ask the question that we are puzzled over.
    We have two campuses. There will be 4 nodes in the cluster, 2 per campus. We have two NetApp FAS 3070's, one per campus. Each NetApp will export voting/ocr disk, an ASM disk, and redo/archivelog locations.
    Each node will use ASM's normal redundancy for fault tolerance. The nodes will see a single disk for ASM from campus A, and a single storage disk for ASM from campus B. ASM will mirror the data between the two campuses.
    Each node will use normal redundancy for the voting disk and OCR.
    The thought here is that we could lose an entire datacenter--2 nodes, a FAS3070, server switch--everything--and still be up on the other campus. However, here's the question: What happens when the inter-campus network links fail? In this situation, nodes 1 and 2 on campus A can still see all storage, voting disk, and ocr on its campus from its 3070, but they cannot see nodes 3 and 4, or the storage on the other campus. Nodes 3 and 4 can still see all of its storage, voting disk, and ocr on campus B, but they cannot see nodes 1 and 2 or the storage on campus A. Basically, each set of nodes (1 and 2 on campus A, and 3 and 4 on campus B) can operate independently from the other campus' nodes. Does anyone know how clusterware would handle such a situation?
    Thanks for any input!

    Here is what will be the final production configuration:
    Definitions:
    Campus A (NetApp A), and Campus B (NetApp B) will be the two sites.
    We have 4 filers, two on each campus. There exists mirrored and unmirrored storage (so that we can do failovers in the case of a campus/site outage). We will be using unmirrored storage so as not to duplicate data 4 times. The disks will be configured on one NetApp from each campus to use the RAID-DP feature provided by the filer, but the other campus will not do NetApp/filer-level mirroring. All mirroring will be done by Oracle's ASM. Basically, when I say "mirrored" or "unmirrored", I'm referring to filer-level mirroring, not ASM-level or Oracle "normal redundancy" mirroring. Here's how the exports look:
    NA-A will export one 500GB volume for datafile storage, unmirrored.
    NA-A will export one 500GB volume for archivelog storage, unmirrored.
    NA-A will export one 50GB volume for redo log storage, unmirrored.
    NA-A will export one 1GB volume for OCR and voting disk storage, unmirrored
    NA-A will export one 1GB volume for voting disk storage, MIRRORED. This is so that if an entire site fails, and that site contains the majority of the voting disks, we can failover the mirrored voting disk volume to the other campus and bring that side up. This allows for the smallest amount of downtime.
    NA-B will export one 500GB volume for datafile storage, unmirrored.
    NA-B will export one 500GB volume for archivelog storage, unmirrored.
    NA-B will export one 50GB volume for redo log storage, unmirrored.
    NA-B will export 1GB volume for OCR and voting disk storage, unmirrored.
    $ORACLE_HOME will be local to the node. We want to be able to do rolling upgrades.
    Each node will have a public, private, and storage interface, all on different VLANs.
    ASM will mirror the two 500GB volumes, one from each side. Oracle will use internal mechanisms to mirror the OCR disks and voting disks. At this point, we will remain completely online should the campus with the single voting disk fail, and be offline for a very short amount of time if the campus with two voting disks fails. The downtime will be the time it takes for a sysadmin to log in to the filer and take over the other filer's disks. This happens very quickly, typically in less than a minute.
    Any other questions, feel free to ask.
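
    For completeness: the voting-disk layout that drives the majority rule can be checked from any node, and the sub-cluster that retains access to a majority of the voting disks is the one that survives an interconnect split:

        # list the configured voting disks (Oracle Clusterware 10gR2 syntax)
        crsctl query css votedisk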

  • Multiple vsan traffic over single port-channel

    Hi -
    Scenario: a 2-interface uplink (port-channel Po10) from NetApp FAS-A to N5548-A & B. Po10 is currently configured with vPC 10 and vfc10 at the N5K end. A single vfc is currently mapped to a single VSAN (vfc10 with VSAN 1011).
    Q: Is it possible to make the port-channel pass multiple VSANs (VSAN 1011 & 1012)? If yes, then how (over the same vfc, or with a separate vfc on the same port-channel)?
    Subhankar      
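
    A hedged sketch of what allowing a second VSAN across the same vfc trunk might look like on the N5K (VLAN/VSAN numbers taken from the post; whether the NetApp CNA end actually supports FCoE VSAN trunking needs to be confirmed with NetApp):

        vlan 1012
          fcoe vsan 1012                          <--- map the new VSAN to its own FCoE VLAN
        interface port-channel 10
          switchport trunk allowed vlan add 1012  <--- carry the new FCoE VLAN on Po10
        interface vfc10
          switchport trunk allowed vsan 1011
          switchport trunk allowed vsan add 1012

    Only one vfc can be bound to a given port-channel, so the "separate vfc on the same port-channel" variant shouldn't be configurable; that's worth double-checking in the NX-OS configuration guide for your release.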

    This router’s capability is limited by and dependent on the services that your ISP has given or allowed you to use. I think it really has to be a one-to-one configuration, and not only with this router: I haven’t noticed any router that has this feature so far. This is really another idea Linksys can work on.

  • Latency pinging VM to VM.

    When pinging VM to VM there is high latency: 100ms/150ms/200ms.
    Physical NICs are teamed; the vendor is Broadcom with the latest driver.
    VMs are 2008 R2/2008 with some applications installed.
    Integration services are installed on all VMs.
    The two Hyper-V hosts are Windows 2012 R2 with the latest patches.
    The servers are HP ProLiant DL380p.
    Suggestions and recommendations will be highly appreciated.

    Hi Lai,
    My setup is Windows Server 2008 R2 Datacenter Edition on two HP DL380p servers.
    Storage: NetApp FAS 2240.
    I have installed the OS, the latest patches, and all the required services on both servers.
    The cluster is created and the validation reports are good.
    A test VM was created in the cluster; I tested live migration/quick migration and moved it to the other node in the cluster as well.
    Now I have mapped a RAW LUN from storage directly to the VM; this LUN is offline and not formatted at the host level.
    I have added this LUN to Cluster Storage as well.
    Now, when migrating the VM, it throws an error: the migration attempt failed.
    My question: do I have to make that LUN a CSV in order to migrate the VM to another node?
    Please help; suggestions and recommendations highly appreciated.
    Thanks..!!!
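
    If it turns out that a CSV is the way to go, the conversion is a one-liner from the FailoverClusters PowerShell module (the resource name is illustrative; check Get-ClusterResource for the real one):

        # convert an existing clustered disk into a Cluster Shared Volume
        Add-ClusterSharedVolume -Name "Cluster Disk 2"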
