I'm looking for failover/high availability solutions for the IPS 4200 Series

Hi all,
I have been trying to find failover/high availability solutions for the IPS 4200 series, but I didn't see any failover options in the IPS guide documentation. Can anybody help me?

I do not know if this is documented anywhere, but I can tell you what I do. As long as the IPS 4200 has power, with the right software settings the unit can fail open so that it still passes traffic. Should the unit lose power, it does stop all traffic. So I run a patch cable in parallel with the inline IPS unit, in the same VLAN, with a higher STP cost. All traffic traverses the IPS unit when possible, but should something happen to it, a $10 patch cable takes over.
Mike
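
(An addition, not from the original posts: a minimal watcher sketch in Python. TARGET is a hypothetical host on the far side of the sensor; when the inline IPS dies and spanning tree cuts over to the patch cable, the brief reconvergence gap shows up as lost probes.)

```python
#!/usr/bin/env python3
"""Crude path-flap watcher for the inline-IPS-plus-patch-cable setup."""
import subprocess
import time

TARGET = "192.0.2.10"   # assumption: a host reached through the IPS path
INTERVAL = 5            # seconds between probes

def is_reachable(host: str) -> bool:
    # One ICMP echo with a 2-second timeout (Linux iputils syntax).
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

was_up = True
while True:
    up = is_reachable(TARGET)
    if up != was_up:
        state = "restored" if up else "LOST (possible IPS failover)"
        print(f"{time.ctime()}: path to {TARGET} {state}")
        was_up = up
    time.sleep(INTERVAL)
```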

Similar Messages

  • How can I achieve a high availability solution for Directory Server


    You can start with deploying multi master replication which will give you 2 servers available for writes (and as many read-only consumers as you want).
    You can also install Directory Server in a Cluster (using Sun Cluster) which will provide more failover capabilities.
    If you combine both, you should be able to have almost no downtime.
    You can also use the Directory Proxy Server (aka iDAR) to provide transparent failover for client applications.
    I hope this helps.
    Regards,
    Ludovic.
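
    (An addition, not from Ludovic's reply: if client-side failover is acceptable instead of, or alongside, the Directory Proxy Server, most LDAP client libraries can pool servers themselves. A minimal sketch with the Python ldap3 library; hostnames and credentials are placeholders.)

    ```python
    # Client-side failover across two multi-master replicas.
    # FIRST + active=True means: always try the first (preferred) server,
    # fall back to the next one if it is down, and recheck it later.
    from ldap3 import Connection, Server, ServerPool, FIRST

    pool = ServerPool(
        [Server("ldap1.example.com"), Server("ldap2.example.com")],
        FIRST,
        active=True,
        exhaust=False,
    )

    conn = Connection(pool, user="cn=Directory Manager",
                      password="secret", auto_bind=True)
    conn.search("dc=example,dc=com", "(uid=jdoe)", attributes=["cn", "mail"])
    print(conn.entries)
    ```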

  • SQL Server Analysis Services (SSAS) 2012 High Availability Solution in Azure VM

    I have been testing an AlwaysOn high availability failover solution with SQL Server Enterprise on an Azure VM, and this works pretty well as a failover for SQL Server databases, but I also need a high availability solution for SQL Server Analysis Services, and so far I haven't found a way to do this. I can load-balance it between two machines, but this doesn't work as a failover, and because of the restriction of not being able to use shared storage in a failover cluster in Azure VMs, I can't set it up as a cluster, which is required for AlwaysOn in Analysis Services.
    Has anyone else found a solution for AlwaysOn high availability for SQL Analysis Services in an Azure VM? As my databases are read-only, I would be satisfied with even just a solution that would sync the OLAP databases and switch the data connection to the same server as the SQL databases.
    Thanks!
    Bill

    Bill,
    So, what you need is a model like SQL Server failover cluster instances (as before SQL Server 2012). In SQL Server 2012, AlwaysOn replaces SQL Server failover clustering, and it has been separated into AlwaysOn Failover Cluster Instances (SQL Server) and AlwaysOn Availability Groups (SQL Server). Since your requirement is not at the database level, I think the best option is to use AlwaysOn Failover Cluster Instances (SQL Server).
    As part of the SQL Server AlwaysOn offering, AlwaysOn Failover Cluster Instances leverage Windows Server Failover Clustering (WSFC) functionality to provide local high availability through redundancy at the server-instance level: a failover cluster instance (FCI). An FCI is a single instance of SQL Server that is installed across WSFC nodes and, possibly, across multiple subnets. On the network, an FCI appears to be an instance of SQL Server running on a single computer, but the FCI provides failover from one WSFC node to another if the current node becomes unavailable.
    It is similar to the SQL Server failover cluster in SQL 2008 R2 and before.
    Please refer to these references:
    Failover Clustering in Analysis Services
    Installing a SQL Server 2008 R2 Failover Cluster
    Iric Wen
    TechNet Community Support
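
    (An addition, not from the reply above: since an FCI needs shared storage that Azure VMs could not provide, Bill's fallback of keeping synced read-only copies and switching the data connection can be done client-side. A pattern sketch in Python; connect_fn stands in for whatever driver call actually opens the session.)

    ```python
    # Client-side connection failover across synced read-only replicas.
    # This is a generic pattern sketch, not an AlwaysOn feature.
    from typing import Callable, Optional, Sequence, TypeVar

    T = TypeVar("T")

    def connect_with_failover(servers: Sequence[str],
                              connect_fn: Callable[[str], T]) -> T:
        """Try each server in order; return the first session that opens."""
        last_error: Optional[Exception] = None
        for server in servers:
            try:
                return connect_fn(server)
            except Exception as exc:  # narrow to the driver's error type in real code
                last_error = exc
        raise ConnectionError(f"all servers failed: {last_error}")

    # Hypothetical usage: primary first, the synced copy as fallback.
    # session = connect_with_failover(["ssas-primary", "ssas-secondary"], open_session)
    ```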

  • IPS High Availability Solution

    Hi all,
    I have a requirement for redundancy for an IPS appliance in a data center design. I have dug through the Cisco docs, but found that resiliency and HA (high availability) from the IPS point of view can only be provided on the switch side (HSRP/EtherChannel load balancing).
    Is there any viable way to implement high availability in a dynamic way?
    Regards,
    Belal

    Belal
    You are correct, only one sensor at a time will pass traffic.
    Spanning Tree Protocol uses layer 2 frames called BPDUs to determine whether a path to the root bridge (in this case, within the VLAN) exists. If the primary sensor stops passing layer 2 frames (a good indication that the rest of your traffic is not going to get through the sensor), then BPDUs will not pass through the primary sensor, and Spanning Tree will unblock the secondary path through the standby sensor. You may want to watch for an SNMP trap from the switch to know when that happens.
    The failover cable is just an ordinary rollover cable between two ports (in the two VLANs) on the switch. I called it a failover cable because it only carries traffic when the sensor has failed to pass layer 2 (and above) frames.
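
    (An addition, not from the reply above: the simplest way to watch for that SNMP trap is a listener on UDP port 162. The sketch below only logs that a trap arrived; decoding the PDU properly needs a real SNMP library such as pysnmp.)

    ```python
    # Bare-bones SNMP trap watcher: note when the switch sends a trap
    # (e.g. a topology change after STP unblocks the standby path).
    # Binding port 162 normally requires root/administrator rights.
    import socket
    import time

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 162))   # standard SNMP trap port

    print("waiting for traps...")
    while True:
        data, (addr, port) = sock.recvfrom(4096)
        print(f"{time.ctime()}: {len(data)}-byte trap from {addr}")
    ```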

  • Good/robust/high performance solution for calling Java from C

    Hi,
    I am looking for a good/robust/high-performance solution for invoking a local Java component from a C application on Windows.
    About the Java component:
    - It has multiple methods that will be called very frequently from the C application
    - It will reside on the local machine; this machine may not have access to the internet/LAN
    My preference is to implement this Java component as an independently running exe and have the C application make out-of-process calls. Is this possible?
    Any help/suggestions on this are greatly appreciated
    Thanks,
    -Arun

    > I will stop replying to this conversation, unless it remains professional and without getting personal.
    I suggest you read your replies again then, since you were the one that strayed from the topic and referred to my competency.
    >> Or conversely, those that have been working with JNI for close to 10 years might have a different opinion than someone who is promoting a commercial product?
    >
    > I cannot see anything wrong with sharing and letting people know of potential solutions that might help to solve their problems.
    Not a problem at all. You, however, stated an absolute. I asked you to back up that absolute with something that was objective. You then lashed out at me and questioned my competency.
    > So you are questioning my competence as well as my intent?
    And not bothering to address the question at all.
    >> I am guessing you haven't actually attempted to compare JNI to on-board sockets, but instead you are just guessing.
    >
    > Frankly, I don't see how a socket solution with process-switching overhead is going to work faster than a direct method call, passing parameters over the stack, the whole operation taking place in-process.
    And thus, as I thought, you haven't actually attempted to compare the two.
    >> And in terms of competency, I can only wonder if you are competent enough to write the test frameworks that would demonstrate one or the other.
    >
    > As said earlier, there is no need to get personal, let's keep it professional!
    And as I said earlier, I would suggest that you look to your own replies.
    > As mentioned, I do have enough experience for those who need to interoperate with Java using JNI, and can show how to do it in a safe and productive manner.
    And as I have said for the third time now, that has nothing to do with what I actually asked you.
    >> The code that someone writes via JNI has much less usage, and thus is much more likely to have problems, even if someone is competent in creating JNI and C code. Thus JNI is less stable. And by the way, that includes the commercial product that you are promoting as well.
    >
    > Expressing an opinion on a product just based on its underlying technology without proper evaluation doesn't sound like constructive criticism.
    So you are claiming that your product is in fact more stable than the VM, or at least as stable, even though it has had far, far less usage?
    > The only thing that can be said about the product so far is that a product that has been in production for some years without any problems has a certain level of reliability and quality, whatever the underlying technology might be, and that there is enough competency with JNI to produce it and advise about the technology.
    True. But you were the one that brought up unrelated issues and then, as best as I can tell, thought to challenge my knowledge by pointing out how VMs "really" work.

  • High Availability Solution (TAF)

    Hi
    There is a requirement to set up an environment that provides high availability; two nodes are provided for this purpose. System downtime is not acceptable. Following are the details.
    OS: Windows Server 2003 Enterprise Edition
    DB: Oracle 10g Release 2
    Storage: NAS
    App: Oracle Forms 6i (client/server) & ASPs
    The proposed solutions were to implement RAC or Oracle Fail Safe, but I don't think they are supported on NAS. Also, with Oracle Fail Safe there will be downtime while the failover happens. A SAN cannot be arranged, and the requirement is immediate.
    Any help will be appreciated.
    Best Regards
    Abbas

    Thanks, forbrich.
    Do you know of any specific doc that describes the installation and configuration steps for 10g RAC on NAS? If possible, can you provide a link that I could use to perform this task?
    I have done RAC installations on SAN without any problems, and it's something I'm fairly experienced with. With NAS I am not really comfortable, since I can't seem to find any documentation that describes a step-by-step installation procedure, or guidelines for that matter.
    Thank you for your input
    Best Regards
    Abbas
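
    (An addition, not from the thread: if RAC and Fail Safe are both off the table, a rough application-level approximation of TAF is to reconnect to the surviving node and retry. A sketch using the cx_Oracle driver; DSNs and credentials are placeholders.)

    ```python
    # Application-level stand-in for TAF: retry a query against the
    # surviving node after a dropped session.  Not transparent like real
    # TAF, and in-flight transactions are still lost.
    import time
    import cx_Oracle

    DSNS = ["node1-db", "node2-db"]   # two TNS aliases, one per node

    def run_query(sql, retries=3):
        for attempt in range(retries):
            for dsn in DSNS:
                try:
                    with cx_Oracle.connect("scott", "tiger", dsn) as conn:
                        return conn.cursor().execute(sql).fetchall()
                except cx_Oracle.DatabaseError:
                    continue              # try the other node
            time.sleep(2 ** attempt)      # back off before the next round
        raise RuntimeError("both nodes unreachable")

    # rows = run_query("SELECT sysdate FROM dual")
    ```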

  • High Availability Solutions for Oracle R12

    Hi Gurus,
    We are looking for an HA solution for our R12 E-Business Suite. Currently we have options like HP Serviceguard from HP.
    Kindly provide your valuable suggestions for an HA solution from the Oracle side.
    Please also provide details such as licensing, advantages, disadvantages, prerequisites, ease of administration, etc.
    Your replies are highly appreciated.
    Thanks.

    > Thanks for your replies. Does Oracle RAC need extra licensing?
    Absolutely, yes. RAC is neither cheap nor free. ;-)
    > I would like to have enough information before I can go to our sales representative. Actually I am planning to give a presentation on high availability options for Oracle R12, and I assume I will be asked about the licensing. I need your help.
    A conversation with your Oracle Sales representative will be far more informative than asking questions about licensing on these forums. If you want baseline information about license costs, you can check the Oracle Store (https://oraclestore.oracle.com), but you may be able to negotiate a better rate with Oracle Sales.
    Update: In other words, "what Hussein said." I really need to refresh these threads more often when composing a reply, especially if I wander away and come back. ;-)
    Regards,
    John P.
    http://only4left.jpiwowar.com
    Edited by: jpiwowar on Jun 1, 2010 10:40 PM

  • Detail on High Availability options for Web Apps

    Hi,
    I really struggle to locate actual information on Azure availability offerings/capabilities; as an Infrastructure Architect, this has bugged me for years now with Azure offerings!
    I want something that simply states the availability within any local DC, and the options for true HA over two or more DCs.
    We are moving away from using Web Roles toward Web Apps for our clients' solutions. I understand the principles of fault domains, etc., and the Web Role requirement for a minimum of two instances to (mostly) avoid MS update disruption within a single data center, but I cannot locate similar information with regard to Web Apps.
    I'd really appreciate it if someone could point me to some appropriate detail, as I've failed so far...
    (Also, cannot find anything on DocumentDB....)
    Many Thanks,
    Lee

    Hi,
    High availability of a running service always comes with a cost, and priorities will be app-specific. If it's the web tier, then you may indeed want to consider hosting in multiple geos. If it's a backend processing tier, sometimes it's "ok" to let a service go offline, as long as requests are queued up. If it's the storage system (preventing queueing of messages), perhaps an alternate queue in a different data center could be available for redundancy purposes.
    I would request you to check this article:
    https://msdn.microsoft.com/en-us/library/azure/dn251004.aspx
    Hope this information helps.
    Regards,
    Azam khan
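
    (An addition, not from the reply above: the alternate-queue idea is easy to sketch. The send functions below are placeholders for whatever storage SDK calls you actually use.)

    ```python
    # Fallback enqueue: write to the primary region's queue, and divert
    # to a secondary-region queue if the primary is unreachable.

    def send_primary(msg: str) -> None:
        raise ConnectionError("primary storage is down")   # simulate an outage

    def send_secondary(msg: str) -> None:
        print(f"queued in secondary region: {msg}")

    def enqueue(msg: str) -> None:
        try:
            send_primary(msg)
        except ConnectionError:
            # A separate reconciliation job (not shown) drains these
            # back to the primary once it recovers.
            send_secondary(msg)

    enqueue("process-order-42")
    ```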

  • Looking for SharePoint Project Management Solution for Office 365 - Risks, Issues, Budget

    Hi All,
    I'm looking for a project management solution/plugin, like the PM Central product below, for the Office 365 cloud:
    http://store.bamboosolutions.com/BambooMainWeb/pmc-tryit.aspx
    Looking to get more resources and links
    Thanks in advance

    Hi  Patrick,
    There are many project management apps for SharePoint Online, such as Project Management, EPM Pulse for Project Online, and so on.
    Here is the Office Store URL; you can choose an appropriate app and add it to your SharePoint Online:
    http://office.microsoft.com/en-us/store/results.aspx?qu=project&avg=osu150&vtags=Project%20Management
    Best Regards,
    Eric
    Eric Tao
    TechNet Community Support

  • High availability solution for Office Web Apps Server 2013 for SharePoint 2013

    I have a scenario with three nodes, SP1, SP2, and DR-SP, each running SharePoint 2013 Enterprise on Windows Server 2012 Standard. All three nodes are members of a single SharePoint farm that spans two data centers: the primary data center has two nodes (SP1, SP2) and the DR data center has one node (DR-SP).
    For Office Web Apps Server I have two nodes, OWA1 and DR-OWA, on Windows Server 2012 Standard, spanning the two data centers: the primary data center has OWA1 and the DR data center has DR-OWA.
    Currently I have configured Office Web Apps on the primary data center node OWA1. How can I enable high availability of the Office Web Apps server in case of a primary data center outage? Please guide.

    WAC servers on a single farm must be in the same data center. The WAC configuration you've proposed is unsupported. Build a separate WAC farm in the secondary data center and attach it to the SharePoint farm only in the event of a disaster.
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • High availability solution for SharePoint service applications

    I have a scenario with three nodes, SP1, SP2, and DR-SP, each running SharePoint 2013 Enterprise on Windows Server 2012 Standard. All three nodes are members of a single SharePoint farm that spans two data centers: the primary data center has two nodes (SP1, SP2) and the DR data center has one node (DR-SP).
    For MS SQL I have three nodes, DB1, DB2, and DR-DB, on Windows Server 2012 Standard, each running an instance of SQL Server 2012 Enterprise, participating in a single Windows Server Failover Cluster (WSFC) that spans the two data centers: the primary data center has two nodes (DB1, DB2) and the DR data center has one node (DR-DB).
    Currently I have configured the service applications on the primary data center node SP1. How can I enable high availability for the different SharePoint 2013 service applications, like the User Profile service application, the Search service application, and the Managed Metadata service application, in case of a primary data center outage?

    Stretched farms are not supported unless the WAN provides sub-millisecond ping times.
    Scott Brickey
    MCTS, MCPD, MCITP
    www.sbrickey.com
    Strategic Data Systems - for all your SharePoint needs
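
    (An addition, not from the reply above: a quick way to sanity-check that sub-millisecond requirement between farm members is to time TCP connects. Host and port below are placeholders; SMB (445) or SQL (1433) are reasonable test ports.)

    ```python
    # Median TCP connect time between farm members, as a rough proxy for
    # network round-trip latency (a connect costs about one RTT).
    import socket
    import statistics
    import time

    def connect_rtt_ms(host: str, port: int, samples: int = 10) -> float:
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=2):
                pass
            times.append((time.perf_counter() - start) * 1000)
        return statistics.median(times)

    print(f"median RTT: {connect_rtt_ms('DR-SP', 445):.2f} ms")
    ```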

  • Generating license for ISE high availability primary/secondary nodes

    We have two ISE servers that will act as primary/secondary in a high availability setup.
    The ISE 1.0.4 installation guide, page 93, mentions that "If you have two Cisco ISE nodes configured for high availability, then you must include both the primary and secondary Administration ISE node hardware and IDs in the license file."
    However, after entering the PAK in the licensing page, the only required fields are:
    - Primary Product ID
    - Primary Version ID
    - Primary Serial No
    In this case, how can I include both the primary and secondary hardware and IDs?
    Thanks in advance.

    I am referring you to the Cisco ISE Nodes for High Availability configuration guide. Please check:
    http://www.cisco.com/en/US/docs/security/ise/1.0/user_guide/ise10_dis_deploy.html#wp1128454

  • Windows NLB for SPF high availability

    I have SPF in my organization as a bridge between VMM and WAP. One of my design tenets is to configure everything in a high availability configuration, and I am having trouble getting SPF to work in an HA environment.
    If I set the binding on SPF's website to the NLB name and use the certificate that I have for it, WAP is unable to write anything to the SPF database. If I set SPF back to no binding and use the certificate that was created for the machine itself, WAP works exactly as it should.
    The NLB that I am using is Windows NLB, and I have it configured in unicast with all ports being forwarded to the SPF server. I have just one NIC per server on my SPF instance.
    I understand that SPF is supported in this configuration and have configured it using this guide: http://www.hyper-v.nu/archives/mvaneijk/2013/06/system-center-2012-sp1-service-provider-foundation-high-availability/ It works from the server to the VMM server (using the web test as prescribed), but I get an error 12 that references the provider section of the site when I try to create the SPF connection from WAP.
    Does anybody have any ideas? 

    Thank you for the prompt reply. The certificates that I am using are created by my enterprise CA, which is automatically trusted by my WAP servers. The servers all get a client/server authentication certificate as part of AD's auto-enrollment.
    I have created a custom certificate with just the name of the NLB. Do I need to specify the actual server names of the SPF environment as SANs in the certificate request? Could that be the problem?
    The single-server SPF uses a certificate that is trusted, with the single server name.
    The NLB environment uses a custom-request certificate that has the client/server EKU, where the only name registered is the DNS name of the NLB's IP address. This is the same name that I put in the binding for the SPF website on my SPF servers. (This method is broken.)
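
    (An addition, not from the thread: to confirm which names the certificate served by the SPF site actually carries, you can pull it off the wire and list the SANs. A Python sketch; the host is a placeholder, and 8090 is SPF's default HTTPS port.)

    ```python
    # List the DNS subjectAltName entries of the certificate a host serves,
    # to check whether the NLB name (and any node names) are present.
    import socket
    import ssl
    from cryptography import x509   # pip install cryptography

    def served_sans(host: str, port: int = 8090):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False          # fetch the cert even if names mismatch
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der)
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        return san.value.get_values_for_type(x509.DNSName)

    print(served_sans("spf-nlb.contoso.local"))
    ```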

  • Advice for keeping high-quality files for archiving

    I'm contemplating archiving here. I love reference files because they are pretty small in file size and don't compress. However, I have to worry that all my files will remain in the same exact place forever, and I have found that is not always the case, for whatever reason. Then there are self-contained reference files: a great idea, as I don't have to worry about moved files, but the file size is enormous. I do long videos (sometimes an hour or so), so file size is really important to me.
    Is there any way I can do a self-contained reference file but somehow reduce the file size? I know I'm throwing out a bone here, but I figured I would ask, because my current scenario is not improving. I know I can put my hour or whatever to tape, but that essentially is compressing it, and I'm using up valuable time. Any advice?

    Hi Lowsky, I've been down this road some time back, and my opinion is that if you are working in any tapeless workflow associated with CMFs (contemporary media formats), then storing your LONG TERM content as CMFs on spinning disk will eventually lead to misery and a hopeless state of anger and heartache. Spinning disk is not the long-term answer.
    Refer here for lots of detail; I have posted this in another forum:
    http://www.reduser.net/forum/showthread.php?t=21284&page=6
    *Archiving FCP Media Manager Projects, essence, import/transfer material.. etc*
    Aside from many testimonials about DISK being faster (more accessible?) and convenient, storing content on spinning disk is like a child having all the toys you could imagine but only playing with the 3-4 favourites (replace with any analogy). Simply put, you only need to keep on disk the stuff that you actually use a lot, or are using currently.
    I (and I'm not alone) don't trust any disk storage system unless you buy enterprise-level disk arrays with some fault tolerance and smart disk array controllers, AND you keep multiple instances of the content. By ENTERPRISE disk systems, at $USD5.00-$USD10.00/GB+, I mean those disk arrays from I.T. vendors and M&E guys like Isilon, DDN, Apple, HP, Copan and their 40+ ilk.
    The affordable stuff for most of us is around $USD0.25-$USD0.75/GB: SATA with lowest-end HBAs or simple controllers. Basically just spinning, +power soaking, heat dissipating buckets+ to put our content into (USB | FW, or external JBOD disk enclosures with an HBA, or higher-end FC disk arrays).
    Most of these are enterprise disk storage systems that are out of the price range of many on this forum...
    Devices like DROBO™ are cute, but they are way, way too slow (29MB/s when not busy consolidating itself) and are very limited in their architecture for VIDEO work (useless). I gave mine away after a week, to a photographer.
    Alternatives like many sets of external, detachable, mobile spinning disks, left spun down until use, offer a great idea; however, unless the disk controller is savvy, you may as well have a bank of el cheapo LaCie 1TB death disks and get on your knees and face the moon when you power them up (good luck with that!). /rant
    *Viable Archive Solutions for CMFs?*
    After looking for viable solutions to my content storage problem after moving to a tapeless workflow with my HVX200 and P2, AND losing on two separate occasions a whole 1TB external disk (LaCie), losing most of the data on two drives (2 x 1TB over 4 months), I have had it with disks!
    For me, disk storage should only be used like a +kitchen table+, for immediate, current, and frequently accessed work. As in the +toy analogy+, when not in use, put the toys (the content objects) away somewhere they will not be damaged, like a large toy box or cupboard that you can access with ease (not the garden shed out the back, nor the attic, else it will never be used). This workflow is simple enough.
    My first thought for managing FCP Media Manager archives, P2 footage and final composites was to use BDs; however, the cost for the content I wanted to archive was not effective. I had literally 100s of old video tapes, DV, HDV and ye olde Video8 rubbish that I needed to move to a CMF, all accessible via proxies and metadata (a la CATDV™, a MAM from squarebox.co.uk). The restriction was the COST of BD or DVD-DL media, and worse, the time to write and then access the material ESSENCE when I needed it.
    I even looked for a while at many 100s of DVD-5s, which in Hong Kong are as cheap as free beer coasters in a bar; I would buy them in canisters of 100 at a time. These are cost-effective using TOAST to make multi-volume self-extracting archives, but are SOOOooooo slow to write and then access.
    So I turned to looking at the economics and workflows of using an LTO4 Ultrium data tape drive, and I was quite surprised at the reasonable cost.
    Ultrium LTO4 data tape media holds up to 800GB native, at prices as cheap as $USD1.40/GB for the bare media (the tape cartridge itself); it depends on where you buy them and in what quantity. Of course there are the necessary initial costs of the data tape drive AND the SAS HBA you will need for your Mac (an LTO4 Ultrium drive @ $USD2700, and $USD150+ for the SAS HBA). Ultrium LTO3 is much cheaper, but with lower data rates and capacities.
    Next year HP et al. will have Ultrium LTO5 out at 1.5TB RAW/native capacity, as their roadmap suggests.
    The uncompressed/RAW data rate of LTO4 Ultrium data tape drives for ARCHIVING material is, for me, between 105MB and 109MB/sec (that's megabytes), which simply BLOWS any FireWire 800 interface out of the water. There are many variables, but essentially for people like us whose business is content creation, or even the part-timer who doesn't want to lose stuff, this is really worth a look. Again, it will depend on the software you utilise; I use Tolis Group's BRU.
    Just do the sums on the time it takes you to archive (or copy to another disk system: BD, DVD-5, DVD-9, or a spindle), let's say 4 hours of DVCPROHD 1080p (110Mb/s, or just over 1GB/minute). You can see that this is a lot of content and will take a great deal of time.
    This works out at roughly 240GB.
    I think this will take over 500 minutes (more than 8 hours @ 2 min/GB) to COPY over a FW400 interface, and slightly less over a FW800 interface.
    However, it takes a mere 2,200 seconds, or 36 minutes, using an LTO4 SAS Ultrium tape drive (with no tape mount/dismount/load/locate time)!
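
    (An aside, not from the original post: the arithmetic above, spelled out. The rates are the nominal figures quoted in this post, so treat the output as ballpark.)

    ```python
    # Back-of-envelope copy times for the 240GB example above.
    SIZE_GB = 240
    rates_mb_s = {
        "FireWire 400": 8,      # roughly the 2 min/GB quoted above
        "FireWire 800": 10,     # "slightly less" than FW400's time
        "LTO4 SAS": 109,        # measured 105-109 MB/s
    }
    for name, rate in rates_mb_s.items():
        minutes = SIZE_GB * 1024 / rate / 60
        print(f"{name:>12}: {minutes:5.0f} minutes")
    ```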
    On a lesser scale, have a look at DDS tapes for the low end. Despite rumours/myths, these are quite stable and work well with the right software on the Mac, desktop and laptop alike.
    I am having great success with this archive-and-recall system using Tolis Group's BRU, which is NATIVE on OS X.
    I have posted some very recent information on the two SAS PCIe HBAs I am using (ATTO and LSI) and the two tape drives I am using (HP and Quantum).
    For LTO4 Ultrium SAS tape drives I would recommend only the HP drive and the ATTO ExpressSAS HBA, as this combination is reliable, very stable, and very fast on OS X.
    I am very happy with this archive/recall workflow using LTO4 Ultrium data tape for FCP.
    hth
    w
    HK

  • Hoping for a better docking solution for pen

    I was really close to buying a ThinkPad 10, but eventually got the old ThinkPad Tablet 2 instead. One of my primary mobile applications is making annotations on PDFs while reading them, so the pen is really important; at the same time, the whole device should be light enough to effortlessly carry on the run. Although the new pen may be better than the good old "toothpick", it just does not fit into the device and may get lost easily. Besides, I just hate it when things are scattered in pieces. I have also looked for a hard-case sleeve with a full-size compartment for the pen, but have not found any.
    So what I am hoping for is that the new product iteration will either have a version that can house the pen within the device, or have an optional lightweight sleeve that can entirely house the pen. The current solution for the Tablet and Helix is just clumsy at best.

    Hi, Audrey,
    > Before that, though, does anyone have any suggestions for working around the CHM search limitations?
    If you can educate your users to add wildcards to their search terms, they can find all the words that you provided as examples in your message. For example, searching for auto* will find AutoComments, [Auto Release], [AutoCap], auto-run, and [Autocomment].
    See this article for more information:
    http://helpware.net/htmlhelp/hhfindingtext.htm
    Pete
