SLD in high availability configuration

Hi All,
SLD question:
We're implementing an EP-KM NW 04 scenario without backend integration.
We'd like to run the SLD inside the J2EE EP cluster environment.
Note 825116 explains how to use the SLD in a cluster, but the SLD HTTP connection settings in the SLD Data Supplier service are propagated to all servers.
So the SLD remains a SPOF unless the instance on which it resides sits on a HW cluster, but this is not one of the recommended configurations.
Which is the solution: go through the LB mechanism to reach the SLD via HTTP, or make a specific server configuration, if that is possible?
Regards,
Andrea Cocco

Post the WLC CLI output of the "show ap config general" command for one of your APs. If nothing is configured for HA, or the configured HA values need to be changed, try the CLI commands below for that AP and see what error message you get on the CLI:
(WLC) >config ap secondary-base
(WLC) >config ap primary-base
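For reference, the full syntax of these commands on AireOS WLCs is roughly the following (the controller names, AP name and IPs are placeholders, so substitute your own):
(WLC) >config ap primary-base <primary-WLC-sysname> <AP-name> <primary-WLC-mgmt-IP>
(WLC) >config ap secondary-base <secondary-WLC-sysname> <AP-name> <secondary-WLC-mgmt-IP>
If HA is already configured, the "show ap config general" output should list the configured controllers under fields such as "Primary Cisco Switch Name".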
HTH
Rasika
**** Pls rate all useful responses ****

Similar Messages

  • SCOM 2012 R2 Reporting Services High Availability Configuration Setting

    Can you guys tell me how I can set up a SCOM 2012 R2 report server in a high availability configuration? Kindly share the steps or a screenshot of it.

    The SCOM reporting feature depends on SQL Server Reporting Services, so you may refer to SQL Reporting Services high availability.
    For details, please refer to:
    http://technet.microsoft.com/en-us/library/bb522745.aspx
    http://www.mssqltips.com/sqlservertip/2335/scale-out-sql-server-2008-r2-reporting-services-farm-using-nlb-part-1/
    http://www.mssqltips.com/sqlservertip/2336/scale-out-ssrs-r2-farm-using-windows-network-load-balancing-part-2/
    http://technet.microsoft.com/en-us/library/hh882437.aspx
    Roger

  • SCOM 2012 SP1 Web Console High Availability Configuration

    We are a large children's hospital, with many ancillary units that we are trying to get to help monitor their own servers/devices/applications through the SCOM web console. Due to the number of connections we have two standalone web console servers. The entire environment is running on Windows Server 2012 Standard Edition servers, and we are running SCOM 2012 SP1 UR3. When accessing the web console directly via the FQDN of the server, everything works great. We have an HLB virtual IP and DNS entry (SCOM.domain.com), which works flawlessly when accessing the default IIS website. However, the web console URL (http://SCOM.domain.com/OperationsManager) fails when accessed through the HLB name; the site's authentication providers are Negotiate:Kerberos, NTLM, Negotiate. I believe the issue lies in the fact that the Application Pools are running under the ApplicationPoolIdentity identity, while the HTTP SPNs are using the machine account. We have tried changing the identity of the application pools to a domain account (the SDK service account) and modified the SPNs so the HTTP SPNs are assigned to the domain account. We also created SPNs for the HLB VIP DNS alias, but to no avail. While we realize that Microsoft does not support load balancing of the web console, there has to be a way to make it work. We will have more than 50 connections, so using one web console server is not an option, and asking our users to use two different URLs is not very user friendly. If anyone has any suggestions or has had any success, please let me know.
    Any help is greatly appreciated!

    Is switching to forms-based authentication an option? The extra login might be considered a pain, but forms plus SSL is the only successful load-balanced configuration we have worked with. We use F5s.
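    As a hedged illustration of the SPN work described in the question (SCOM.domain.com is the HLB alias from the question, but the domain and service account below are placeholders), the HTTP SPNs for the HLB alias would be registered against the application pool's domain account roughly like this:
    setspn -S HTTP/SCOM.domain.com DOMAIN\svc-scomweb
    setspn -S HTTP/SCOM DOMAIN\svc-scomweb
    setspn -L DOMAIN\svc-scomweb
    The -S switch refuses to add a duplicate SPN and -L lists what is currently registered; stale HTTP SPNs left on the web servers' machine accounts are a common cause of Kerberos failures behind a load balancer.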

  • 2 x C160 High Availability Configuration Guide

    Hi Folks,
    Has anyone already configured 2 x C150 or C160 in HA?
    I don't want the M Series.
    My setup would be
    Outgoing - IronPort 1 - Internet
    IronPort 2 - Internet
    Can anyone enlighten me on the configuration, if it's possible?
    What will happen if IronPort 1 goes down? Will it automatically use IronPort 2?
    I'm using Domino version 7.02 as my mail server.
    I'm thinking I could define the two IP addresses of the IronPorts on my SMTP relay, so that if it cannot contact IronPort 1 it will use IronPort 2.
    more power and thank you.
    kira

    You can use a DNS MX entry in your SMTP connection document in Domino. Create DNS MX records pointing to both IronPorts and set that host name in the "SMTP MTA relay host" field of the connection document.
    In the MX records you can specify which IronPort is the primary outbound, or with equal preference values you can have automatic load balancing between both.
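    As a minimal sketch of the MX approach described above (all host names and addresses are made up for illustration), the zone entries could look like this, with equal preference values giving automatic use of both appliances:
    relay.example.com.      IN  MX  10  ironport1.example.com.
    relay.example.com.      IN  MX  10  ironport2.example.com.
    ironport1.example.com.  IN  A   192.0.2.11
    ironport2.example.com.  IN  A   192.0.2.12
    You would then put relay.example.com in the "SMTP MTA relay host" field of the connection document; giving ironport1 a lower preference value (for example 10 versus 20) would make it the preferred outbound path with ironport2 as the backup.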

  • File Adapter vs BPEL interaction issue on high availability environment

    Hi all,
    I would really appreciate your help on a matter I'm facing about a composite (SCA) deployed on a clustered environment configured for high availability. To help you better understand the issue, I'll briefly describe what my composite does. Composite instances are started by means of an Inbound File Adapter which periodically polls a directory to check whether any file with a well-defined naming convention is available. The adapter is not meant to read the file content but only its properties. Furthermore, the adapter automatically makes a backup copy of the file and doesn't delete the file. Properties read by the adapter are provided to a BPEL process which obtains them using the various "jca.file.xyz" properties (configurable in any BPEL receive activity) and stores them in some of its process variables. How the BPEL process uses these properties is irrelevant to the issue I'd like to bring to your attention.
    The interaction just described between the File Adapter and the BPEL process has always worked in other non-HA environments. The problem I'm facing is that this interaction stops working when I deploy the composite in a clustered environment configured for high availability: the File Adapter succeeds in reading the file, but no BPEL process instance gets started and the composite instance gets stuck (that is, it keeps running until you manually abort it!).
    Interestingly, if I put a Mediator between the File Adapter and the BPEL, the Mediator instance gets started, that is, the file's properties read by the adapter are passed to the Mediator, but then the composite gets stuck again because even the Mediator doesn't seem to be able to initiate the BPEL process instance.
    I think the problem lies in the way i configured either the SOA infrastructure for HA or the File Adapter or BPEL process in my composite. To configure the adapter, i followed the instructions given here:
    http://docs.oracle.com/cd/E14571_01/integration.1111/e10231/adptr_file.htm#BABCBIAH
    but maybe i missed something. Instead, i didn't find anything about BPEL configuration for HA with SOA Suite 11g (all the material i found refers to SOA Suite 10g).
    I've also read in some posts that in order to use the DB as a coordinator between the file adapters deployed on the different nodes of the cluster, the DB must be RAC! Is that true, or is it possible to use another type of Oracle DB?
    Please, let me know if someone of you has already encountered (and solved :)) a problem like this!
    Thanks in advance,
    Bye!

    Hi,
    thanks for your prompt reply. Anyway, I had already read through that documentation and tried all the settings suggested in it without any luck! I'm thinking the problem could be related to the Oracle DB used in the clustered environment, which is not RAC, while all the documentation I have read about high availability configuration always refers to a RAC DB. Does anyone know if a RAC Oracle DB is strictly needed for file adapter configuration in an HA cluster?
    Thanks, bye!
    Fabio

  • Tuxedo and High Availability

    Can you provide some information on how Tuxedo can be configured in a high availability
    environment. Specifically running Tux 7.1 on AIX 4.3 with HACMP/ES. I am planning
    on running with a 'cascading N+1' configuration and have concerns over the ability
    of the standby node to take over a failed node successfully due to config dependencies
    on the machine name. Is there a white paper detailing use of Tux in a high availability
    environment ?

    Found the answers and thought I would share them.
    1. Can load balancing be achieved in an MP setup, or is this a high availability configuration?
    Both - MP supports load balancing and high availability.
    2. In an MP setup, can a workstation client continue to work even after the master node gets migrated? If so, can we have both (or all) nodes and their WSLs listed in WSNADDR for this to happen?
    Correct.
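    To make that concrete, a minimal sketch with hypothetical site and host names (not taken from the original post): the UBBCONFIG *RESOURCES section declares an MP domain with a backup master that can take over, and workstation clients list every node's WSL in WSNADDR so they keep working if one node goes away:
    *RESOURCES
    MODEL      MP
    MASTER     SITE1,SITE2
    OPTIONS    LAN,MIGRATE
    # ...other required *RESOURCES parameters omitted...
    # on the workstation clients, list both WSL listeners:
    WSNADDR=//hostA:5000,//hostB:5000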

  • High Available SRW2024

    I'm trying to create a highly available setup using two SRW2024 switches and a Linux bonded network adapter in mode 1. Each network adapter of the bond is connected to a switch, and in case of failure of an adapter or switch, the bonded network adapter will fail over to the other switch using the same MAC address.
    I have an aggregated link setup between the two switches. Furthermore I have spanning tree enabled on both switches. If I do a fail-over test and the bond fails over to the other switch, the port state of the aggregated link changes to blocking which blocks all traffic between the two switches.
    In case I use a single (non-aggregated) link I'm able to prevent the blocking by disabling spanning tree on the inter-switch port. In this case the switch needs about 10 seconds to learn the new port of the MAC address. However, I'm not able to make this work on an aggregated link. The UI does have the option to disable spanning tree on the aggregated port, but when saving, spanning tree is still enabled.
    Questions:
    1. What is the recommended setup concerning inter-switch aggregated links and spanning tree configuration for the above-mentioned high availability configuration?
    2. Why is it not possible to disable spanning tree on aggregated links? Is this by design, or is there some problem with the UI?
    Any help appreciated.

    I have rapid spanning tree enabled on both switches.
    The problem is that I have to disable spanning tree on the link connecting the two switches together. If not, the inter-switch link will be blocked the moment I fail over the network bond, because the switch probably thinks there is a redundant path. Is there some other way to prevent the inter-switch link from blocking?
    If not, how can I disable spanning tree on the aggregated link? So far I only managed to do this on a normal link, but cannot do it on an aggregated link.

  • High Availability File Adapter in OSB

    If you use the JCA FileAdapter in OSB, it is necessary to use the eis/HAFileAdapter version, to ensure that only one instance of the adapter picks up a file; you must then configure a coordinator, by setting the
    controlDir, inboundDataSource, outboundDataSource, outboundDataSourceLocal, outboundLockTypeForWrite
    parameters.
    controlDir refers to the filesystem, the others to the DB (a hedged sketch of these settings appears at the end of this thread).
    This document http://www.oracle.com/technetwork/database/features/availability/maa-soa-assesment-194432.pdf says
    "Database-based mutex and locks are used to coordinate these operations in a File Adapter clustered topology. Other coordinators are available but Oracle recommends using the Oracle Database."
    Using an Oracle Database as the coordinator means using RAC, otherwise there is no HA.
    I wonder if anybody has been successful setting up HAFileAdapter without using a DB?
    If DB is required, I am considering using the good old "native" OSB File Poller, since it doesn't require complicated setup to be run in a cluster... but I don't want to use MFL, I would rather use the XSD-based Native Format. Here comes the second question:
    Is it possible to use the nXSD translator using the OSB Native File Poller - instead of the JCA Adapter?
    Thank you so much for your help - it will be rewarded with "helpful/answered" points.
    pierre

    I wonder if anybody has been successful setting up HAFileAdapter without using a DB?
    I have not tried it, but I think there are several options available, including writing your own custom mutex. Please find the details in the Oracle File and FTP Adapters High Availability Configuration section at this link:
    http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/ha_soa.htm#sthref434
    Is it possible to use the nXSD translator using the OSB Native File Poller - instead of the JCA Adapter?
    When you create a JCA Adapter based Proxy Service to read the files, the nXSD translation happens before the proxy service is invoked. JCA Engine first reads the data, translates using nXSD and then invokes the Proxy with the translated content. (You can verify this easily by creating a JCA based file read service and open the test console for it in sbconsole, it will show you XML request instead of native).
    So you cannot read the text content using the File Transport of OSB and then call nXSD directly or call an nXSD-based Proxy Service.
    HOWEVER, you certainly can use File Transport and nXSD in combination if that's what you want.
    1. Create a Synchronous Read File Adapter with an nXSD created for it
    2. Create a Business Service for that Synchronous Read JCA in OSB
    3. Create a File Transport based Proxy Service in OSB which will read the content of the file and then call the Business Service to read the content again (which will include the translation using the nXSD defined in step one to convert the content to XML).
    So basically you will need to read the file twice! Once using a File Transport Proxy service (which will take care of polling in the cluster) and then using a Sync Read JCA-based business service (which will do the nXSD translation). To reduce the impact of reading the file twice you can use trigger files: a File Proxy to read the trigger file and invoke the JCA business service to read the actual file for that trigger.
    Another alternative can be to create a class similar to the one presented here (http://blogs.oracle.com/adapters/entry/command_line_tool_for_testing), but instead of writing a file it will just return the translated content. Call this class with the native content from the File Transport proxy using a Java Callout to do the translation.
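    For completeness, here is a hedged sketch of the eis/HAFileAdapter coordinator settings listed at the top of this thread, roughly as they would appear in the FileAdapter deployment's weblogic-ra.xml (the same values can be set on the outbound connection pool in the WLS console); the directory and data source names are assumptions, not values from this thread:
    <connection-instance>
      <jndi-name>eis/HAFileAdapter</jndi-name>
      <connection-properties>
        <properties>
          <property><name>controlDir</name><value>/shared/fileadapter/control</value></property>
          <property><name>inboundDataSource</name><value>jdbc/SOADataSource</value></property>
          <property><name>outboundDataSource</name><value>jdbc/SOADataSource</value></property>
          <property><name>outboundDataSourceLocal</name><value>jdbc/SOALocalTxDataSource</value></property>
          <property><name>outboundLockTypeForWrite</name><value>oracle</value></property>
        </properties>
      </connection-properties>
    </connection-instance>
    controlDir must be on a filesystem visible to every node, and the data sources point at the schema that acts as the coordinator.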

  • Generating license for ISE high availability primary/secondary nodes

    We have two ISE servers that will act as primary/secondary in a high availability setup.
    The ISE 1.0.4 installation guide, page 93, mentions that "If you have two Cisco ISE nodes configured for high availability, then you must include both the primary and secondary Administration ISE node hardware and IDs in the license file."
    However, after entering the PAK in the licensing page, the only required fields are:
    - Primary Product ID
    - Primary Version ID
    - Primary Serial No
    In this case, how can I include both the primary and secondary HW and IDs?
    Thanks in advance.

    I am referring you to the Cisco ISE Nodes for High Availability configuration guide. Please check:
    http://www.cisco.com/en/US/docs/security/ise/1.0/user_guide/ise10_dis_deploy.html#wp1128454

  • Windows NLB for SPF high availability

    I have SPF in my organization as a bridge between VMM and WAP. One of my design tenets is to configure everything in a high availability configuration, and I am having trouble getting SPF to work in an HA environment.
    If I set the binding on SPF's website to the NLB name and use the certificate that I have for it, WAP is unable to write anything to the SPF database. If I set SPF back to no binding and use the certificate that is created for the machine itself, WAP
    works exactly as it should.
    The NLB that I am using is Windows NLB and I have it configured in unicast with all ports being forwarded to the SPF server. I have just one NIC per server on my SPF instance.
    I understand that SPF is supported and have configured it using this guide... (link: http://www.hyper-v.nu/archives/mvaneijk/2013/06/system-center-2012-sp1-service-provider-foundation-high-availability/) It works from the server to the VMM server (using
    the web test as prescribed) but gets an error 12 that references the provider section of the site when I try to create the SPF connection from WAP.
    Does anybody have any ideas? 

    Thank you for the prompt reply. The certificates that I am using are created by my enterprise CA, which is automatically trusted by my WAP servers. The servers all get a client/server authentication certificate as part of AD's auto-enrollment.
    I have created a custom certificate with just the name of the NLB. Do I need to specify the actual server names of the SPF environment as SANs in the certificate request? Could that be the problem?
    Single server SPF uses a certificate that is trusted with the single server name
    NLB environment uses a custom request certificate that has client/server EKU and the only name that is registered is the DNS name of the NLB's IP address.  This is the same name that I put in the binding for the spf website on my SPF servers. 
    (this method is broken)
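    If it helps, a hedged sketch of a CSR that carries both the NLB name and the individual node names as SANs (all names are placeholders, and this assumes OpenSSL 1.1.1 or later; with an AD CS template the same names would go into the request's SAN attribute):
    openssl req -new -newkey rsa:2048 -nodes -keyout spf.key -out spf.csr \
      -subj "/CN=spf.domain.com" \
      -addext "subjectAltName=DNS:spf.domain.com,DNS:spf01.domain.com,DNS:spf02.domain.com"
    Whether the node names are strictly needed depends on which name WAP ends up validating, but including them removes one variable.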

  • Oracle Identity Federation - High Availability

    Hello,
    We are trying to figure out the high availability options supported by the Oracle Identity Federation. While reading the documentation we find it a bit confusing. We read the OIF Administrator Guide here: http://download.oracle.com/docs/cd/E10773_01/doc/oim.1014/b25355/advtopics.htm#CHDBCDFG
    In Section "9.4 High Availability" it said that "Oracle Identity Federation supports the Cold Failover Cluster (CFC) or active-passive high availability configuration,". In the Application Server 10g guide also said the same and explicitly said that the active-active configuration is not supported for the OIF.
    Then in Section "9.5 Setting Up a Load Balancer with Oracle Identity Federation" it explains how to set up a load balancer for the OIF. When it explains this it says that we can have several instances of OIF in different machines, configured with a load balancer. All these instances share the same transient database where the sessions are stored.
    Which is the difference between this load-balancer-based configuration and an active-active high availability configuration? If one node of the load-balancer configuration goes down, the sessions administered by him are lost? That is the difference?
    Thanks!
    Leonardo

    Hi
    I am not very sure about the High Availability configuration, but for the load balancer, as mentioned in the document, you have to have both instances sharing the transient database where sessions will be stored.
    If the two OIF instances are not sharing the transient database and you have an LB sharing the load, it will not work, as sessions will be stored in memory. So sessions from one OIF instance will not be known and available to the other instance of OIF.
    Thanks
    Kiran Thakkar

  • Wireless access high availability

    Hi,
    I've this scenario:
    - two WLC 4404 located in different sites (site A and site B) with sw release 5.x
    - a wireless mesh area that is linked to these two sites
    - site A and site B are connected via L3 wired link
    - routers that act as a default gateway for the client are located in site A and site B
    Reading some whitepapers on the Cisco website, I understand that I have to configure WLC-A as a primary controller and WLC-B as a secondary backup controller with different mobility groups, but I don't understand where to configure the default gateway IP address for my wireless clients.
    What does a high availability configuration look like for the clients connected to the wireless mesh area in this scenario?
    Where should the default gateway IP address for my wireless clients be configured?
    Thanks
    antonio

    is it possible to have one wlc in a city and the other wlc in another city??
    Nothing wrong with this setup.  It all boils down to your routing.  
    then both WLCs management interfaces should be on the same vlan
    Not necessary. I have seen setups where they use the same VLANs. I've seen setups where they use different VLANs. Again, it all boils down to your routing.

  • ASA 5520: Configuring Active/Standby High Availability

    Hi,
    I am new to Cisco firewalls. We are moving from a different vendor to Cisco ASA 5520s.
    I have two ASA 5520s running ASA 8.2(5). I am managing them with ASDM 6.4(5).
    I am trying to set up Active/Standby using the High Availability Wizard. I have an interface on each device set up with just an IP address and subnet mask: the primary is 10.1.70.1/24 and the secondary is 10.1.70.2/24. The interfaces are connected to a switch, and these interfaces are the only nodes on this switch. When I run the Wizard on the primary, configure it for Active/Standby and enter the peer IP of 10.1.70.2, I get an error message saying that the peer test failed, followed by an error saying ASDM is temporarily unable to connect to the firewall.
    I tried this using a crossover cable to connect the interfaces directly with the same result.
    Any ideas?
    Thanks.
    Dan

    The command Varun gave is right.
    Since you want to know a little bit more about this stuff, here goes a bit. Every interface will have a primary IP and a secondary (standby) IP, and the Active/Standby pair will exchange hello packets on them. If the hellos are not heard from the mate, the unit is declared failed.
    If the primary is the one that gets an interface down, it will fail over to the other unit; if it is the standby that has the problem, the active unit will declare the other unit "standby failed". You will know that everything is alright when you do a "show failover" and the standby unit shows "Standby Ready".
    To configure it, just put a standby IP on every interface to be monitored. (If by any chance you don't have an available standby IP for one of the interfaces, you can avoid monitoring that interface with the command "no monitor-interface nameif", where nameif is the name of the interface without the standby IP.)
    Then put in the commands for the failover and stateful link; the stateful link will copy the connection table (among other things) to avoid downtime while passing from one unit to the other. This link should have at least the same speed as the regular data interfaces.
    You can configure the failover link and the stateful link on just one interface by using the same name for the link; remember that this link will have a totally separate subnet from the ones already used in the firewall.
    This is the configuration:
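    ! On the primary unit: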
    This is the configuration
    failover lan unit primary
    failover lan interface failover gig0/3
    failover link failover gig0/3
    failover interface ip failover 10.1.0.1 255.255.255.0 standby 10.1.0.2
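    ! On the secondary unit: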
    failover lan unit secondary
    failover lan interface failover gig0/3
    failover link failover gig0/3
    failover interface ip failover 10.1.0.1 255.255.255.0 standby 10.1.0.2
    Make sure that you can ping each other's primary/standby IPs and then put in the "failover" command, first on the primary and then on the secondary.
    That should be fine.
    Let me know if you have further doubts.
    Link for reference
    http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a008080dfa7.shtml
    Mike

  • Configure OAS with Data Guard for high availability

    Hi,
    We use Oracle Application Server to connect to a 10g database. This prod DB has a physical standby DB. I am trying to do a failover test where I make the physical standby the primary and connect to that DB from our application.
    I changed the dads.conf file to point to the physical standby DB, but I get "ORA-01033: ORACLE initialization or shutdown in progress" Error-Code:1033 Error TimeStamp:Fri, 8 May 2009 21:57:55 GMT,
    but this physical DB is up and open and in read-write state.
    The DB names are prod and prod_stby. Is this because of the different names?

    Hello,
    I think you need to keep the infrastructure database names the same. Have you considered using the failover capabilities within 10gAS for your application servers? This is different from the
    Data Guard standby database option. Here is a good Metalink note on how to set up and configure 10g Application Server failover:
    Understanding OracleAS 10g High Availability - A Roadmap- Metalink Note #412159.1
    Cheers,
    Ben
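    A hedged sketch of one common approach (the host names below are made up): instead of editing dads.conf at switchover time, point it at a connect descriptor or TNS alias that lists both hosts under one service name, so the application reaches whichever database is currently open as primary:
    PROD =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = prodhost.example.com)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = stbyhost.example.com)(PORT = 1521))
          (FAILOVER = on)
        )
        (CONNECT_DATA = (SERVICE_NAME = prod))
      )
    This only helps if both sites offer the same service name and only the database currently in the primary role has it registered, which ties back to Ben's point about keeping the names the same.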

  • Configuring two 11g OID servers in High Availability mode.

    I have an OID1 server where I have installed OID 11g and WLS, using SSL port 3131 and non-SSL port 3060. The LDAP setup is working, as the SQL*Net connections are using the LDAP adapter to resolve requests.
    I have an OID2 server where I have installed OID 11g using the same ports.
    Now, I want to set up a cluster for these two so that the load balancer will automatically route requests to either of the two servers, so that if one is unavailable the other will fill the request. I am following the "Configuring High Availability for Identity Management Components" document, but it is not very clear what steps need to be followed.
    Any suggestion will be appreciated.
    I am also having a problem using ldapbind or any of the OID commands, as they give "unable to locate message file: ldap<language>.msb" despite the fact that I am setting all the env vars such as ORACLE_HOME, ORACLE_INSTANCE, ORA_NLS33 and so on.

    You don't need to set up a cluster for the load balancer. The load balancer configuration can point to both servers and, depending on the LBR configuration, act in failover or load-balanced mode. All you need to take care of is that the two OID servers are using the same schema.
    When installing the first OID server there is an option to install in cluster mode, and when installing the second server you can use the option to expand the cluster created in the first installation. But that should not stop you from configuring OID in highly available mode using a load balancer as explained above.
    "unable to locate message file: ldap<language>.msb" occurs if you have not set the ORACLE_HOME variable. See that it is set to <MiddlewareHome>/Oracle_IDM1 if you have used the defaults.
    Hope this helps,
    Sagar
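    To illustrate the last point, a quick sanity check (the paths and host below are assumptions based on a default install; adjust them to yours):
    export ORACLE_HOME=/u01/app/oracle/Middleware/Oracle_IDM1
    export ORACLE_INSTANCE=/u01/app/oracle/Middleware/asinst_1
    $ORACLE_HOME/bin/ldapbind -h oid1.example.com -p 3060 -D cn=orcladmin -q
    If ORACLE_HOME is unset or points to a home without the ldap/mesg directory, the tools cannot find ldap<language>.msb and fail with exactly that error.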
