UCCX 7.0 High Availability IP Addressing

Hi,
I am installing UCCX in HA mode. The servers are on the same site and have an RTT of less than 2 ms.
I am wondering whether to put them in the same VLAN or in separate VLANs. The design guide does not seem to state a preference.
Please let me know what approach works for you.

Hello James,
Since you mention HA over the IP WAN: that is only supported as of UCCX 8.0; UCCX 7.0 does not support it. This is documented in the SRND:
http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/crs/express_7_0/design/guide/uccx70srnd.pdf
Page 66 says:
"Cisco Unified CCX high availability requires that the Cisco Unified CCX Engine and Database components and the CTI Managers with which the Cisco Unified CCX servers communicate be located in the same campus LAN and that the maximum round-trip delay between these servers be less than 2 ms"
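If it helps, one quick sanity check of that requirement (a sketch assuming Linux iputils ping and a hypothetical peer hostname uccx-node2) is to look at the maximum RTT, not just the average:

```shell
# Send 50 probes to the peer node (uccx-node2 is a placeholder hostname)
# and print the average and maximum RTT from ping's summary line.
ping -c 50 uccx-node2 | tail -1 \
  | awk -F'/' '{ printf "avg=%s ms max=%s ms\n", $5, $6 }'
```

The SRND wording is about maximum round-trip delay, so the max value is the one to compare against 2 ms.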
HTH
Please rate this post if it was helpful.
Walter Solano
CCVP, Cisco UCCX Specialist
Cisco IP Communications Express Specialist

Similar Messages

  • High available address with zone cluster

    Hi,
I have run through the Sun documentation and still don't fully understand how to ensure the high availability of an IP address "inside" a Solaris zone cluster.
    I installed a new zone cluster with the following configuration:
    zonename: proxy
    zonepath: /proxy_zone
    autoboot: true
    brand: cluster
    bootargs:
    pool:
    limitpriv:
    scheduling-class:
    ip-type: shared
    enable_priv_net: true
    net:
         address: 193.219.80.85
         physical: auto
    sysid:
         root_password: ********************
         name_service: NONE
         nfs4_domain: dynamic
         security_policy: NONE
         system_locale: C
         terminal: ansi
    node:
         physical-host: cluster1
         hostname: proxy_cl1
         net:
              address: 193.219.80.92/27
              physical: vnet0
              defrouter: 193.219.80.65
    node:
         physical-host: cluster2
         hostname: proxy_cl2
         net:
              address: 193.219.80.94/27
              physical: vnet0
              defrouter: 193.219.80.65
    clzc:proxy>
    clzc:proxy>
    After installation, I've tried to configure a new resource group with a logicalhostname resource in it inside the zone cluster:
    /usr/cluster/bin/clresourcegroup create -n proxy_cl1,proxy_cl2 sharedip
    and got the following error:
    clresourcegroup: (C145848) proxy_cl1: Invalid node
Is there any other way to make an IP address inside the "proxy" zone cluster highly available?
    Thanks.
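Not an authoritative answer, but my understanding (a hedged sketch; the resource names and HA hostname below are placeholders) is that the -n list must use the zone-cluster node names, which match the physical-host values (cluster1/cluster2) rather than the zone hostnames, and that the address itself is then made highly available with a LogicalHostname resource:

```shell
# Run inside the zone cluster; node names match the physical-host entries.
/usr/cluster/bin/clresourcegroup create -n cluster1,cluster2 sharedip
# ha-proxy-host is a placeholder hostname resolving to the floating address.
/usr/cluster/bin/clreslogicalhostname create -g sharedip -h ha-proxy-host proxy-lh
/usr/cluster/bin/clresourcegroup online -M sharedip
```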

    I have rapid spanning tree enabled on both switches.
The problem is that I had to disable spanning tree on the link connecting the two switches. Otherwise the inter-switch link is blocked the moment I fail over the network bond, presumably because the switch sees a redundant path. Is there some other way to prevent the inter-switch link from blocking?
If not, how can I disable spanning tree on the aggregated link? So far I have only managed to do this on a normal link, not on an aggregated one.
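For what it's worth, if these are Cisco switches, one possible sketch is to filter BPDUs on the port-channel interface itself (Port-channel1 is an assumed name); note that this disables loop detection on that link, so it is only safe if no second inter-switch path can ever appear:

```
interface Port-channel1
 description inter-switch aggregated link
 spanning-tree bpdufilter enable
```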

  • UCCX stand alone to high availability

I have a production UCCX single-server deployment on 7.0(1). I am going to upgrade this to high availability and just needed to check the procedure for upgrading the existing production server to the first node in HA. My second server is already built with UCCX and SQL 2000, so it is ready to go. Reading the upgrade guide, it seems all I need to do is upgrade SQL and then apply the HA license (the guide feels vague for something this crucial). So you do not need to re-run the install to define the first node?
Any opinion (or a link to a more detailed document) would be appreciated.

I know this is an old post, but for the sake of the community, the answer is yes. If it was set up as a stand-alone node to begin with, it can be changed to the first node by using the cet.bat tool to change the startup behavior of the AppAdmin page, as described here:
http://www.cisco.com/en/US/products/sw/custcosw/ps1846/products_tech_note09186a00805a7acc.shtml
1. On the CRS server, go to C:\Program Files\wfavvid\ and double-click the cet.bat file.
2. Click No when the warning appears.
3. Right-click the AppAdminSetupConfig object in the left pane and select Create.
4. Click OK.
5. In the new window, click the com.cisco.crs.cluster.config.AppAdminSetupConfig tab.
6. Choose Fresh Install from the drop-down list to change the value for Setup State.
7. Click OK.
8. After you create the AppAdminSetupConfig object, log in with the user name Administrator and password ciscocisco, and then run the setup again.
This will allow you to change the node from stand-alone to first node by re-running the initial UCCX setup wizard.
    Don Mynes

  • OIM 11g High Availability Deployment

    Hi Experts,
I'm deploying OIM 11g in a high-availability scheme, following the Oracle docs: http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF. I have successfully installed and configured OIM and SOA in a WebLogic domain on OIMHOST1. To propagate the configuration from OIMHOST1 to OIMHOST2, I packed the domain on OIMHOST1 (using pack.sh) and unpacked it on OIMHOST2 (using unpack.sh), then updated the Node Manager by executing setNMProps.sh and finally started the Node Manager. To verify that everything is fine, and following the documentation, I am trying to perform the steps below, but without success.
I must say that I am running on a single Standard Edition DB instance and not RAC as mentioned in the Oracle docs. Please clarify whether RAC is required; for now I am in a development environment, so I think it is not.
    8.9.3.8.3 Start the WLS_SOA2 and WLS_OIM2 Managed Servers on OIMHOST2
    Follow these steps to start the WLS_SOA2 and WLS_OIM2 managed servers on OIMHOST2:
    Stop the WebLogic Administration Server on OIMHOST2. Use the WebLogic Administration Console to stop the Administration Server.
    Start the WebLogic Administration Server on OIMHOST2 using the startWebLogic.sh script under the $DOMAIN_HOME/bin directory. For example:
/u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>&1 &
    Validate that the WebLogic Administration Server started up successfully by bringing up the WebLogic Administration Console.
Here it is not possible to start the AdminServer on OIMHOST2. First of all, it looks like the boot.properties file under WLS_OIM_DOMAIN_HOME/servers/AdminServer/security is not valid: the first time I executed the startWebLogic.sh script it asked for a username/password. I updated boot.properties (vi boot.properties) and manually set a clear-text username and password; this time the startWebLogic.sh script got past this stage, but then fails:
    <Error> <util.install.help.BuildMasterHelpSet> <BEA-000000> <IOException ioe java.io.IOException: No such file or directory>
    <Error> <oracle.adf.share.config.ADFMDSConfig> <BEA-000000> <MDSConfigurationException encountered in parseADFConfigurationMDS-01330: unable to load MDS configuration document
    MDS-01329: unable to load element "persistence-config"
    MDS-01370: MetadataStore configuration for metadata-store-usage "writeable" is invalid.
    MDS-00503: The metadata path "/u01/app/oracle/product/Middleware/user_projects/domains/IDMDomain/sysman/mds" does not contain any valid directories.
I have verified that this "mds" directory does not exist on OIMHOST2, as reported by the IOException, but it does exist on OIMHOST1. From here it is not possible for me to follow Oracle's documentation. I tested this by starting the AdminServer on OIMHOST1 and starting the WLS_SOA2 and WLS_OIM2 managed servers from the OIMHOST1 AdminServer console, in two ways:
1. All managed servers on OIMHOST1 are shut down: in this case, the managed servers on OIMHOST2 work as expected.
2. All managed servers on OIMHOST1 are RUNNING: in this case, I first started the SOA2 managed server and then the OIM2 managed server. When it finishes the boot process, the following message appears in the server's output:
    <Warning> <org.quartz.impl.jdbcjobstore.JobStoreCMT> <BEA-000000> <This scheduler instance (servername.domainname1304128390936) is still active but was recovered by another instance in the cluster. This may cause inconsistent behavior.>
    Start the WLS_SOA2 managed server using the WebLogic Administration Console.
    Start the WLS_OIM2 managed server using the WebLogic Administration Console. The WLS_OIM2 managed server must be started after the WLS_SOA2 managed server is started.
    8.9.3.9 Validate the Oracle Identity Manager Instance on OIMHOST2
    Validate the Oracle Identity Manager Server instance on OIMHOST2 by bringing up the Oracle Identity Manager Console using a web browser.
    The URL for the Oracle Identity Manager Console is:
    http://oimvhn2.mycompany.com:14000/oim
    Log in using the xelsysadm password.
Your help is highly appreciated.
    Regards
    Juan

    Hi Vaasu,
I have succeeded in deploying OIM in HA; right now my customer and I are working on the installation of the web tier. I now have a better understanding of the HA concepts and the way WebLogic works (really nice, but a little tricky).
All the magic of HA is in configuring the network interfaces properly on each Linux box (in our case). So, first of all, you need to create two new floating IPs on each Linux box (Google "how to create a virtual IP in Linux" if you don't know how): clone and modify your 'eth0' network script to create the virtual IPs.
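As a rough sketch of that step (assuming RHEL-style network scripts; the address below is a placeholder, and your IPs and prefix will differ):

```shell
# Clone the eth0 script to create the first alias (floating IP) on Linux box1.
cd /etc/sysconfig/network-scripts
cp ifcfg-eth0 ifcfg-eth0:1
# In ifcfg-eth0:1, set DEVICE=eth0:1 and IPADDR to the floating address,
# e.g. IPADDR=192.0.2.11 (placeholder), then bring the alias up:
ifup eth0:1
# Repeat with ifcfg-eth0:2 for the second floating IP.
```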
Follow the procedure in the HA guide: http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF
    create DB schemas with RCU
    install weblogic
    install SOA
    patch SOA
    install IAM
    ---if you are working on a virtual machine is good idea to take a snapshot here---
Create and configure the WebLogic domain (pay special attention when configuring the cluster); see step 13 of 8.9.3.2, Creating and Configuring the WebLogic Domain for OIM and SOA on OIMHOST1. Here you need to configure:
    For the oim_server1 entry, change the entry to the following values:
    Name: WLS_OIM1
Listen Address: the IP configured on eth0:1 of Linux box1
    Listen Port: 14000
    For the soa_server1 entry, change the entry to the following values:
    Name: WLS_SOA1
Listen Address: the IP configured on eth0:2 of Linux box1
    Listen Port: 8001
    For the second OIM Server, click Add and supply the following information:
    Name: WLS_OIM2
    Listen Address: the IP configured on eth0:1 of Linux box2
    Listen Port: 14000
    For the second SOA Server, click Add and supply the following information:
    Name: WLS_SOA2
    Listen Address: the IP configured on eth0:2 of Linux box2
    Listen Port: 8001
    Click Next.
In step 16, ensure you use the UNIX tab to configure the machines; also ensure that for machine1 you use the IP configured on the eth0 interface of Linux box1, and the same for machine2.
Please confirm you have performed 8.9.3.3.2, Update Node Manager on OIMHOST1.
If everything is OK, you should be able to start the AdminServer as described in the guide.
Configure OIM: 8.9.3.4.2, Running the Oracle Identity Management Configuration Wizard. In my case I don't need LDAP sync, so I skipped that section. If you configure OIM properly, you must then perform 8.9.3.5, Post-Configuration Steps for the Managed Servers.
Restart the AdminServer, then start OIM and SOA from the WebLogic console. If the Node Manager is configured properly, SOA and OIM should run fine. Update the deployment mode and coherence settings as described in the guide and verify that OIM runs perfectly on Linux box1.
    Propagate OIM from Linux box1 to Linux box2 as described in the guide, using pack and unpack (you MUST use the same filesystem directory structure on both Linux boxes)
    Update and start NodeManager as described in the guide
VERY IMPORTANT OBSERVATION
The guide says:
    8.9.3.8.3 Start the WLS_SOA2 and WLS_OIM2 Managed Servers on OIMHOST2
    Follow these steps to start the WLS_SOA2 and WLS_OIM2 managed servers on OIMHOST2:
    Stop the WebLogic Administration Server on OIMHOST2. Use the WebLogic Administration Console to stop the Administration Server.
JUAN'S OBSERVATION:
It is not possible to start or stop the AdminServer on HOST2, since the AdminServer was configured to listen on the IP address of the eth0 interface of HOST1, so it cannot be brought up on HOST2. I think an additional procedure is needed to configure the AdminServer for HA in an active-passive mode.
Start the WebLogic Administration Server on OIMHOST2 using the startWebLogic.sh script under the $DOMAIN_HOME/bin directory. For example:
/u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>&1 & ----- NOT APPLICABLE
Validate that the WebLogic Administration Server started up successfully by bringing up the WebLogic Administration Console. ----- NOT APPLICABLE
Start the WLS_SOA2 managed server using the WebLogic Administration Console. ----- start SOA2 from the console running on HOST1; it doesn't matter
Start the WLS_OIM2 managed server using the WebLogic Administration Console. The WLS_OIM2 managed server must be started after the WLS_SOA2 managed server is started. ----- start OIM2 from the console running on HOST1
At this point you should be able to log in to the OIM2 server as described in the guide; you don't need to execute the config.sh script. This should work as described.
Server migration should work straightforwardly if you have configured the floating IPs as described. I have not configured persistence yet, since my customer does not have the skills to set up shared storage.
    I hope this helps, and feel free to comment or complement.
By the way, do you know how to set up a valid SSL certificate on Windows Server 2003? I need it to test an Exchange 2007 integration I'm trying to do.
    Regards
    Juan

  • High Availability of BPEL System

We have a high-availability architecture for the BPEL system in our production environment.
    The BPEL servers are clustered in the middle tier of the architecture and RAC is used in the database tier of the architecture.
    We have 5 BPEL processes which are getting invoked within each other. For eg:
    BPELProcess1 --> BPELProcess2 --> BPELProcess3, BPELProcess4 &
    BPELProcess4 --> BPELProcess5
Now that all of the above BPEL processes are deployed on both nodes of the BPEL server, how do we handle the endpoint URLs of these BPEL servers?
Should we hardcode the endpoint URL in the invoking BPEL process, or should we replace the IP addresses of the two BPEL server nodes with the IP address of the load balancer?
If we replace the IP address of the BPEL server with the IP address of the load balancer, it will require us to modify, redeploy, and retest all the BPEL processes again.
    Please advise
    Thanks

The BPEL servers are configured in an active-active topology, and RAC is used in the database tier of the architecture.
The BPEL servers are not clustered; a load balancer is used in front of the two BPEL server nodes.

  • JDBC adapter connected to a DB in high availability!

    Hi folks,
I have finished my File -> XI -> JDBC scenario and am now preparing to transport it to QAS. I found that the QAS database runs on two cluster nodes. I have two hostnames to fill in for the <IP address> parameter and I don't know which one to use. Am I supposed to use both, or do I need a virtual one?
Connection: jdbc:oracle:thin:@<IP address>:<listener port>:<instance name (database name)>
    Thanks in advance,
    Ricardo.

I don't know this high-availability mechanism with two IPs, but if you really have two hostnames, you should think about a way to switch easily from one node to the other, deactivating one and activating the other (for instance with Controlling a Communication Channel Externally: http://help.sap.com/saphelp_nw2004s/helpdata/en/45/0c86aab4d14dece10000000a11466f/frameset.htm).
I suggest you verify how it will be in the production instance: a real cluster? switchover? a virtual IP?
    Regards,
    Sandro
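For what it's worth, if the two hostnames are nodes of an Oracle RAC, the thin driver also accepts a full connect descriptor that lists both hosts with failover, so a single channel can reach either node (the hostnames, port, and service name below are placeholders):

```
jdbc:oracle:thin:@(DESCRIPTION=
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=TCP)(HOST=qasdb1)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=qasdb2)(PORT=1521))
    (FAILOVER=on))
  (CONNECT_DATA=(SERVICE_NAME=QAS)))
```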

  • Best practice for High availability design, HSRP

    Hi,
    I am planning to create High Availability for LAN to WAN connectivity.
But I want your opinion on the best way to do this. I googled for a solution, but didn't find what I would consider the right answer.
    The situation:
    I have 2 3945E Routers and 2 3560 switches. The design that I am planning to implement is below.
The main goal is to have a redundant connection if any one of the devices fails. For example, if R1 fails, R2 should become active; if SW1 fails, SW2 will take care of reachability, and vice versa. Router 1 should always be preferred if its ISP link is up, because of its greater bandwidth. That is why I have drawn two connections to two separate switches: if SW1 fails, I will still have a connection to the WAN through R1.
The router interfaces should be configured with subinterfaces (preferred over secondary IP addresses), because more than 10 subnets will be assigned to the LAN segment. The routers have 4 Gi ports.
HSRP must be enabled on the LAN side, because the PCs on the LAN must have a redundant default gateway.
    So, the question is - what is the best and preferred way to do this?
In my opinion, I should use BVI to combine R1's two interfaces into one logical interface, and do the same for R2.
Next, turn the router into an L3 switch using IRB, and then configure HSRP.
    What would be your preferred way to do this?

    Hi Audrius,
I would suggest you go with HSRP; you would use GLBP where you want load balancing.
I think the connectivity between your routers (3945) and switches (3560) is a gigabit connection, which is high speed. So keep one physical link from your switches to each router and run HSRP on those router physical interfaces.
This way you will have high availability: if R1 fails, R2 will take over.
Regarding the config, see below what I have at one of my customer DCs.
    ACTIVE:
    track 1 interface GigabitEthernet0/0 line-protocol
    track 2 interface GigabitEthernet0/0 line-protocol
    interface GigabitEthernet0/1
    ip address 10.10.10.12 255.255.255.0
    ip nat inside
    ip virtual-reassembly
    duplex full
    speed 100
    standby use-bia scope interface
    standby 0 ip 10.10.10.10
    standby 0 priority 110
    standby 0 preempt
    standby 0 authentication peter2mo
    standby 0 track 1 decrement 30
    standby 0 track 2 decrement 30
    STANDBY:
    track 1 interface GigabitEthernet0/0 line-protocol
    interface GigabitEthernet0/1
    ip address 10.10.10.11 255.255.255.0
    ip nat inside
    ip virtual-reassembly
    duplex full
    speed 100
    standby use-bia scope interface
    standby 0 ip 10.10.10.10
    standby 0 priority 90
    standby 0 authentication peter2mo
    standby 0 track 1 decrement 30
Please rate the helpful posts.
    Regards,
    Naidu.

  • WebLogic Server clusters not high available!!

I'm working with WebLogic Server 6.0.
I try to connect to the cluster using explicit IP addresses.
In this case, I first cluster servers A, B, and C, then the client looks up the home interface on server A.
After server A returns the home reference, the client creates the EJB object and calls methods using that reference.
If server A fails, the client keeps working with the other servers, B and C.
But if a new client tries to find the home reference via server A, it can't.
Does this mean the WebLogic Server cluster is not highly available, or did I do something wrong in the configuration?
And if I missed something, how can I fix it?
Thanks for your attention.
              

Hi King,
If you look up a non-existent IP/port, it will not respond. That is expected behavior.
You should not be using an explicit IP like that. At the very least, use DNS round robin.
Peace,
Cameron Purdy
Tangosol Inc.
Tangosol Coherence: Clustered Coherent Cache for J2EE
Information at http://www.tangosol.com/
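A related option: the WebLogic initial-context provider URL can list every cluster member, so the client's first lookup does not depend on server A alone (hostnames and port below are placeholders):

```
t3://serverA:7001,serverB:7001,serverC:7001
```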

  • HTTP Server High Availability

    Hello All.
    I have a question regarding OC4J and HTTP server High Availability.
I want to do something like Figure 3-1 of the Oracle Application Server High Availability Guide 10.1.2. See this link:
    http://download-east.oracle.com/docs/cd/B14099_11/core.1012/b14003/midtierdesc.htm#CIHCEDFC
    What I have now is the following:
    Three hosts
Two of them are OAS 10.1.2 instances, where we already configured the cluster and deployed our applications (we used this tutorial: http://www.oracle.com/technology/obe/obe_as_1012/j2ee/deploy/j2eecluster/farmcluster.htm).
These nodes are:
    - host1
    - host2
The other one is a standalone Oracle Web Cache (it will act as the load balancer). We will call this
    - hostwc3
We already configured Web Cache as the load balancer and it is working just fine. We also configured session replication successfully, and it works great with our applications.
What we don't have clear is the following:
When a client visits http://hostwc3/application/, the load balancer routes him to, say, http://host1/application/, and the browser's URL no longer shows the virtual server (the Web Cache server) but the actual Apache address (host1) that is serving him. If we "kill" the ENTIRE host1 (Apache, OC4J, etc.), clients WILL perceive the outage, and if they press F5 they will try to reach an Apache that is no longer up and running. The expected behavior is that the browser NEVER shows the actual Apache URL, so that when an Apache goes down the client is not disconnected (as happens with an OC4J failure) and always works with the "virtual web server".
I came up with some ideas, but I would like your advice:
- In Web Cache, do not route to Apache for load balancing, but route to the OC4J instances directly (is this possible?)
- Configure an HTTP Server cluster; this means we have to have a "virtual name" for the two Apaches. Is this possible? How?
- Use Apache's rewrite module. Is this a good idea?
- Any other idea how to fix the Apache single point of failure?
According to Figure 3-1 (link above), we can have the HTTP Server in a cluster, but I have no idea how to manage or configure it.
Thanks in advance for any help!

    You cannot point Outlook Anywhere to your DAG cluster IP address. It must be pointed to the actual IP address of either server.
For no extra cost, DNS round robin is the best you will get, but it has drawbacks: it may hand out the IP address of a server you have taken down for maintenance, or of a server that has an issue.
You could implement a load balancer, but again, if you are doing this for high availability you want more than one load balancer in the cluster; otherwise you have just moved your single point of failure.
Keeping your existing NAT and simply remembering to update it to point to the other server during maintenance may suit your needs for now.
If you can go into more detail about what high availability your business is looking to achieve, and the budget, we can suggest the best method to meet those needs at that price point.
    Have a great day
    Oliver
    Oliver Moazzezi | Exchange MVP, MCSA:M, MCITP:Exchange 2010,Exchange 2013, BA (Hons) Anim | http://www.exchange2010.com | http://www.cobweb.com | http://twitter.com/OliverMoazzezi
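As an illustration of the DNS round-robin option mentioned above (a zone-file sketch; the name, TTL, and addresses are placeholders), the same name simply gets one A record per server, and resolvers rotate through them:

```
mail.example.com.  300  IN  A  192.0.2.10
mail.example.com.  300  IN  A  192.0.2.20
```

A short TTL limits how long clients keep using the address of a server that is down for maintenance.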

  • Migrating Dimension Server from local installation to High Availability

    Hi Team,
When Hyperion (11.1.2.3) was installed at the client site, there was no requirement for EPMA to be highly available.
It has now become a requirement to make it highly available in case one of the virtual machines is unreachable.
    In the event that a server becomes unavailable, it is acceptable for a manual failover procedure to be done.
    What steps can be taken to prepare for such an event?
    Possible outcome:
    Configure another dimension server on the second Virtual machine and leave it manually switched off.
    Modify the IIS path to point to a clustered location accessible by both machines.
    In the event of a failover, manually bring up the dimension server on the other machine.
    This is yet to be implemented successfully.
Current scenario:
    1 x Windows Machine hosting:
      Foundation Services
      Calculation Manager
      Performance Architect
      Planning
      Reporting and Analysis
      Repository location on a shared mount (Clustered filesystem)
      FDQM
      Essbase
      Analytic Provider Services
      Essbase Integration Services (not used)
      Essbase Studio (not used)
    1 x Linux Machine
      Essbase Server
      Arborpath on a shared mount
    Aim: Add 2 new Virtual Machines ( 1 x Windows App Server + 1 x Linux Essbase Server) for high availability.
    Progress:
      Windows Machine components clustered:
      Foundation Services
      Calculation Manager
      Planning
      Reporting and Analysis
      Essbase
      Analytic Provider Services
      Linux Essbase Machines:
      Configured Active/Passive using OPMN
    Challenge faced:
The failover scenario can be a manual list of steps to restore functionality.
Please provide guidance on introducing failover steps for the dimension server.

    Remote Desktop/Preferences/Task Server
Add the FQDN or IP address of your server in this pane on your book (basically, you're pointing your book to the software installed on the other machine). Set up your server with the installs you want it to make, and administer it via your book (or any machine with another unlimited ARD license). If you want report generation, schedule the clients to send their info to the server. A task server can only do two things: install packages or change client settings; that's it.

  • Topology.svc - Endpoints - Web Services High Availability

    Hi,
I was recently performing some simple DRP tests before going to production, and I faced some issues I had never encountered before.
(I followed
http://blogs.msdn.com/b/besidethepoint/archive/2011/02/19/how-i-learned-to-stop-worrying-and-love-the-sharepoint-topology-service.aspx for useful commands related to endpoints.)
    My farm:
    SP 2013 - CU August 2013
    2 WFE (WFE1, WFE2)
2 App (App1, App2): most services started on both servers (UPSS on APP1, UPS on both); Central Admin on both.
    SQL Cluster
At the normal state, the command (Get-SPTopologyServiceApplicationProxy).ApplicationProxies | Format-List *
returns ServiceEndpointUri:
https://app1:32844/Topology/topology.svc
(If I'm not wrong, this topology.svc can run on only one server at a time.)
I stopped WFE1: no problem, the NLB (appliance) is doing its job.
Then I stopped App1 and started to have some issues (most endpoints were not balanced to App2).
I ran the job "Application Addresses Refresh Job",
or launched the PS command: Start-SPTimerJob job-spconnectedserviceapplication-addressesrefresh
and waited 20 seconds.
A few endpoints are now on APP2 (MMS, Search); it seems to work, and I reached my web page.
I asked a mate to try, and he got "Sorry, we encountered an error..." > Can't load user profile.
I refreshed my browser and got the same error.
Reviewing the ULS logs, I can see that some svc requests (mostly about user and ProfileDBCacheService) still go to APP1:
    A failure was reported when trying to invoke a service application: EndpointFailure Process Name: w3wp Process ID: 6784 AppDomain Name: /LM/W3SVC/93617642/ROOT-1-130445830237578923 AppDomain ID: 2 Service Application Uri: urn:schemas-microsoft-com:sharepoint:service:e8315f8e5d7d4b1b90876e3b0043a4ae#authority=urn:uuid:164efb17f28c4d2d9702ce3e86f0c0e8&authority=https://app1:32844/Topology/topology.svc
    Active Endpoints: 1 Failed Endpoints:1 Affected Endpoint:
    http://app1:32843/e8315f8e5d7d4b1b90876e3b0043a4ae/ProfileService.svc
    The command (Get-SPTopologyServiceApplicationProxy).ApplicationProxies | Format-List *
    still returns that my topology.svc is on
    https://app1:32844/Topology/topology.svc
    But APP1 is down!
    If my understanding is correct: normally, the internal round-robin load balancer (the Application Discovery and Load Balancer Service, started on all servers, not configurable) should manage this.
    The "Application Addresses Refresh Job" runs every 15 minutes and refreshes the available endpoints, using the topology.svc.
    But the topology.svc being called is always the one on APP1, which is down!
    So far I haven't found why SharePoint does not detect that APP1 is down and does not
    automatically recreate the topology service on another available server.
    If you have any idea...your help is welcome :)
    Regards,
    O.P

    Hi,
    To achieve web service high availability, you need to make sure your service applications have more than one server servicing them; this increases back-end resiliency if a SharePoint server drops off the network.
    You can refer to the blog:
    http://blogs.msdn.com/b/sambetts/archive/2013/12/05/increasing-service-application-redundancy-high-availability-sharepoint.aspx
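    In this thread's case, the idea would look roughly like the following PowerShell sketch (the server name "APP2" and the "User Profile Service" type name are taken from this thread and may differ in your farm; run from an elevated SharePoint Management Shell):

```powershell
# Start a second instance of the service on another application server,
# then refresh the endpoint list used by the round-robin load balancer.
Get-SPServiceInstance -Server APP2 |
    Where-Object { $_.TypeName -eq "User Profile Service" } |
    Start-SPServiceInstance

Start-SPTimerJob job-spconnectedserviceapplication-addressesrefresh
```

    With a second instance online, the topology service has a healthy endpoint to hand out even when APP1 is down.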
    Thanks,
    Eric
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support,
    contact [email protected]
    Eric Tao
    TechNet Community Support

  • Multi Site SQL High Availability group

    Hi,
    We have our primary Data Centre where we have 2 MS SQL 2012 Enterprise Servers and 2 SharePoint 2013 Enterprise servers.
    We have SQL High Availability group implemented on our primary Site (Site A)
    Site A has subnet of 192.168.0.0.
    We recently added a new DR site (Site B), where we also have MS SQL 2012 Enterprise Servers. Site B has a subnet of
    172.1.1.0.
    Both sites are connected via a VPN tunnel. The MS SQL 2012 Enterprise Server at Site B has been added to the SQL High Availability
    group of Site A.
    The SQL High Availability group has 2 IPs, 192.168.0.32 and 172.1.1.32. The SQL listener has 2 IPs, 192.168.0.33 and 172.1.1.33.
    We want to make sure that if Site A goes completely down, Site B takes over as the active site. But when Site A is down, we are unable
    to ping the High Availability group, and Site B cannot act as the active site. SQL and SharePoint services are completely down.
    Site A has AD (Primary Domain Controller) and Site B has an ADC (Additional Domain Controller).
    Site A has the witness server.
    We are using Server 2012 Data Centre
    Please suggest.
    Farooq

    SharePoint is not the same as any other applications. The DR site has to be a completely different farm from your production. This means that the SharePoint_AdminContent and config databases on both farms are different and should not be included in the Availability
    Group (they do not support asynchronous Availability Group.) Only content databases and other SharePoint databases supported in an Availability Group should be included as per this
    TechNet article. Have a look at this
    blog post for your networking configuration.
    The reason your Windows cluster service goes down in the DR data center when your primary data center goes down is that the cluster has no majority of votes. You have 4 available votes in your cluster - 3 in the primary data center in the form of
    the 2 Availability Group replicas and the witness server and 1 in the DR data center. If the Windows cluster does not have majority of votes, it automatically shuts down the cluster. This is by design. That is why when this situation happens, you need to force
    the cluster to start without a quorum as described in
    this article (Windows Server 2012 R2 introduced the concept of dynamic witness and dynamic quorum to address this concern.) By default, the IP address of the Availability Group listener name will be registered on all of the DNS servers if the cluster is
    running Windows Server 2012 and higher (this is the RegisterAllProvidersIP property that Samir mentioned.) However, you need to flush the DNS cache on the SharePoint web and application servers and point them to the correct IP address on the DR server after
    failover. This is because the default TTL value of the Availability Group listener name is 20 minutes. 
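    The vote arithmetic above can be sketched in a few lines of Python (a toy model of the static-quorum rule described here; dynamic quorum in Windows Server 2012 R2 changes this behavior):

```python
# Simplified model of static Windows cluster quorum: the cluster stays up
# only while a strict majority of all configured votes remains online.
def cluster_has_quorum(votes_online: int, votes_total: int) -> bool:
    return votes_online > votes_total // 2

# 4 votes total: 2 AG replicas + 1 witness in the primary DC, 1 replica in DR.
print(cluster_has_quorum(3, 4))  # primary DC up: True, cluster runs
print(cluster_has_quorum(1, 4))  # primary DC down: False, cluster shuts down
```

    With only 1 of 4 votes reachable, the DR node can never form a majority on its own, which is exactly why the forced-quorum start is needed.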
    It is important to define your DR strategies and objectives when designing the architecture to make sure that the solution will meet your goals. This will also help you automate some of the processes in the DR data center when failover or a disaster occurs.
    Edwin Sarmiento SQL Server MVP | Microsoft Certified Master
    Blog |
    Twitter | LinkedIn
    SQL Server High Availability and Disaster Recovery Deep Dive Course

  • ASA 5520: Configuring Active/Standby High Availability

    Hi,
    I am new to Cisco firewalls. We are moving from a different vendor to Cisco ASA 5520s.
    I have two ASA 5520s running ASA 8.2(5). I am managing them with ASDM 6.4(5).
    I am trying to setup Active/Standby using the High Availability Wizard. I have interfaces on each device setup with just an IP address and subnet mask. Primary is 10.1.70.1/24 and secondary is 10.1.70.2/24. The interfaces are connected to a switch and these interfaces are the only nodes on this switch. When I run the Wizard on the primary, configure for Active/Standby, enter the peer IP of 10.1.70.2 and I get an error message saying that the peer test failed, followed by an error saying ASDM is temporarily unable to connect to the firewall.
    I tried this using a crossover cable to connect the interfaces directly with the same result.
    Any ideas?
    Thanks.
    Dan

    The command Varun gave is right.
    Since you want to know a bit more about this stuff, here goes: every interface will have a primary IP and a standby IP, and the Active/Standby pair will exchange hello packets across them. If hellos are not heard from the mate, the unit is declared failed.
    If it is the primary that gets an interface down, it will fail over to the other unit; if it is the standby that has the problem, the active unit will declare the other unit "Standby Failed". You will know that everything is alright when you do a "show failover" and the standby pair shows "Standby Ready".
    To configure it, just put a standby IP on every interface to be monitored. If by any chance you don't have an available standby IP for one of the interfaces, you can avoid monitoring that interface with the command "no monitor-interface nameif", where nameif is the name of the interface without the standby IP.
    Then put in the commands for the failover and stateful links. The stateful link copies the connection table (among other things) to avoid downtime while passing from one unit to the other; this link should have at least the same speed as the regular data interfaces.
    You can configure the failover link and the stateful link on just one interface by using the same name for both links. Remember that this link uses a totally separate subnet from the ones already used in the firewall.
    This is the configuration.
    On the primary unit:
    failover lan unit primary
    failover lan interface failover gig0/3
    failover link failover gig0/3
    failover interface ip failover 10.1.0.1 255.255.255.0 standby 10.1.0.2
    On the secondary unit:
    failover lan unit secondary
    failover lan interface failover gig0/3
    failover link failover gig0/3
    failover interface ip failover 10.1.0.1 255.255.255.0 standby 10.1.0.2
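    If one of the interfaces has no spare address to use as a standby IP, it can be excluded from health monitoring as described above (a sketch; "dmz" is a hypothetical nameif, not taken from this thread):

```text
! Exclude an interface that has no standby IP from failover health monitoring
no monitor-interface dmz
```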
    Make sure the units can ping each other's primary/standby IPs, and then enter the "failover" command,
    first on the primary and then on the secondary.
    That should do it.
    Let me know if you have further doubts.
    Link for reference
    http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a008080dfa7.shtml
    Mike

  • WLC HA, difference between GLOBAL- and AP- High Availability

    hello everyone,
    I have a question regarding HA and LAP...
    we have two 5508s (sw ver 6.0.199.4); on each specific AP we have an entry for which is its primary and secondary controller.
    So far so good: when one controller fails, the AP connects to the second controller and goes on doing its business...
    so what I am not sure about is what I should configure globally regarding HA
    first question: do I have to configure anything at all?
    second question: what should I configure best? we are using our WLCs only to control APs that are connected to our (WLAN-dedicated) LAN, we are not controlling any APs at a remote-location.
    finally, let me quote the configuration-guide:
    "Follow these steps to configure primary, secondary, and tertiary controllers for a specific access point and to configure primary and secondary backup controllers for all access points."
    and the question for this:
    what is the difference between a controller and a backup-controller?
    from my point of view: if I configure a primary and a secondary controller, the secondary controller is the backup-controller for the primary controller...
    while I am writing this, I would like to apologize for what I am asking here, because at this time I am totally confused about this and to write those questions down, did not help to calm down...
    thank you very much in advance!
    regards,
    Manuel

    hi Leo,
      I tested this out, but I guess it's not working as I thought it would. I configured the backup primary controller IP and name in the global configuration on the Wireless tab of the WLC, and left the AP high-availability settings blank. I joined the AP to the WLC, and "show capwap client ha" output on the AP shows the backup primary controller name. But if I shut down the primary controller, the AP does not join the backup; it just tries to get a WLC IP by renewing DHCP forever and gets stuck in that loop... Below are the outputs. Any idea why it behaves like this? I thought that if there is no HA configured at the AP level, the global config at the controller level should take effect?
    LWAP3-1042#sh cap cli ha
    fastHeartbeatTmr(sec)   7 (enabled)
    primaryDiscoverTmr(sec) 30
    primaryBackupWlcIp      0xA0A700A
    primaryBackupWlcName    WLC2-4402-50
    secondaryBackupWlcIp    0x0
    secondaryBackupWlcName  
    DHCP renew try count    0
    Fwd traffic stats get   0
    Fast Heartbeat sent     0
    Discovery attempt      0
    Backup WLC array:
    LWAP3-1042#
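    As a side note, the primaryBackupWlcIp field in the output above is printed as a raw hex word; interpreted as a 32-bit IPv4 address it decodes to an ordinary dotted quad (a quick check with Python's standard library):

```python
import ipaddress

# The AP prints the backup WLC address as raw hex (0xA0A700A above).
# Interpreting it as a 32-bit IPv4 address gives the dotted form.
print(ipaddress.ip_address(0x0A0A700A))  # 10.10.112.10
```

    So the AP did learn the backup controller's address; the problem is that it never attempts to join it.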
    *Apr 30 20:36:21.324: %CAPWAP-3-DHCP_RENEW: Could not discover WLC using DHCP IP. Renewing DHCP IP.
    Not in Bound state.
    *Apr 30 20:36:31.829: %DHCP-6-ADDRESS_ASSIGN: Interface GigabitEthernet0 assigned DHCP address 10.10.114.49, mask 255.255.255.0, hostname LWAP3-1042
    *Apr 30 20:37:17.832: %CAPWAP-3-DHCP_RENEW: Could not discover WLC using DHCP IP. Renewing DHCP IP.
    Not in Bound state.
    *Apr 30 20:37:28.337: %DHCP-6-ADDRESS_ASSIGN: Interface GigabitEthernet0 assigned DHCP address 10.10.114.50, mask 255.255.255.0, hostname LWAP3-1042
    *Apr 30 20:38:14.338: %CAPWAP-3-DHCP_RENEW: Could not discover WLC using DHCP IP. Renewing DHCP IP.
    Not in Bound state.
    *Apr 30 20:38:24.842: %DHCP-6-ADDRESS_ASSIGN: Interface GigabitEthernet0 assigned DHCP address 10.10.114.51, mask 255.255.255.0, hostname LWAP3-1042
    regards
    Joe
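    For reference, the global backup controllers discussed in this thread are set from the AireOS WLC CLI with something like the following (a sketch of the `config advanced backup-controller` command; the controller name is taken from Joe's output and the IP is its decoded hex value, so adjust both for your environment):

```text
config advanced backup-controller primary WLC2-4402-50 10.10.112.10
save config
```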

  • 2xC350 in High Availability Mode (Cluster Mode)

    Hello all,
    first of all, I'm a newbie with IronPort, so sorry for my basic questions, but I can't find anything in the manuals.
    I want to configure the two boxes in High Availability Mode (Cluster Mode), but I don't understand the IronPort cluster architecture.
    1) in machine mode I can configure IP addresses -> OK
    2) in cluster mode I can configure listeners and bind them to an IP address -> OK
    But how does the HA actually work?
    A) Should I configure the same IP on both boxes and use one MX record, so that if one box is down the other takes over?
    B) Or should I configure different IPs and configure two MX records,
    so that if one box is down the second MX will be used?
    Thanks in advance
    Michael

    The IronPort clustering is for policy distribution only - not for SMTP load management.
    A) Should I configure the same IP on both boxes and use one MX record, so that if one box is down the other takes over?
    You could, using NAT on the firewall, but few large businesses take this approach today.
    Many/most large businesses use a hardware load balancer like an F5, Foundry ServerIron, etc. The appliances themselves would be set up on separate IP addresses. Depending on the implementation requirements, the internal IP address could be a public IP or a private IP.
    B) Or should I configure different IPs and configure two MX records,
    so that if one box is down the second MX will be used?
    If you set up two boxes, even with different MX preferences, mail will be delivered to both MX records. There are broken SMTP implementations that get the priority backwards, and many spammers will intentionally attempt to exploit less-restrictive accept rules on secondary MX receivers and will send to them first.
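    Given that mail reaches both hosts anyway, equal-preference MX records are a common way to spread delivery across both boxes rather than a strict primary/secondary split; the zone entries would look roughly like this (hostnames are illustrative, not from this thread):

```text
; equal preference: remote MTAs pick either host, so both boxes share load
example.com.   IN  MX  10  ironport1.example.com.
example.com.   IN  MX  10  ironport2.example.com.
```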
