N+1 High Availability Firewall requirements

I am deploying a 5508 in one DC and a second 5508 in another DC running the HA-SKU license.
Does anyone know which ports need to be opened for communication between these two controllers?
Thanks
Roger

Hi Roger,
The link below gives the general port numbers you need to open for inter-controller communication. In your case you also need to open UDP 5246 & 5247, since the APs communicating with the N+1 WLC sit behind the firewall.
http://www.cisco.com/c/en/us/support/docs/wireless/4400-series-wireless-lan-controllers/107458-wga-faq.html#qa7
The following link should give some additional port information as well:
http://www.cisco.com/c/en/us/support/docs/wireless/5500-series-wireless-controllers/113344-cuwn-ppm.html#t4
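If the firewall between the sites happens to be Linux-based, a rough iptables sketch of these openings might look like the following (the controller addresses are hypothetical; legacy inter-controller mobility uses UDP 16666 for control and EoIP, IP protocol 97, for data - adjust to your mobility configuration):

# Hypothetical management addresses of the two 5508s
WLC_DC1=10.1.1.10
WLC_DC2=10.2.1.10
# CAPWAP control (UDP 5246) and data (UDP 5247) from APs to the N+1 WLC
iptables -A FORWARD -p udp --dport 5246 -d "$WLC_DC2" -j ACCEPT
iptables -A FORWARD -p udp --dport 5247 -d "$WLC_DC2" -j ACCEPT
# Inter-controller mobility: control messages and the EoIP data tunnel
iptables -A FORWARD -p udp --dport 16666 -s "$WLC_DC1" -d "$WLC_DC2" -j ACCEPT
iptables -A FORWARD -p 97 -s "$WLC_DC1" -d "$WLC_DC2" -j ACCEPT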
HTH
Rasika
*** Pls rate all useful responses ****

Similar Messages

  • High availability - minimum requirements

    Hi,
    We currently have TFS 2012 installed with SQL 2008 on a single server. After a recent outage that required a server rebuild, I'm looking to lower the risk of further outages, and I've been reading about high availability options for TFS. I'm quite interested
    in the SQL AlwaysOn feature, which allows you to run 2 mirrors of your TFS SQL database across 2 servers. My question is: can I have TFS on the same servers? I.e.:
    Node A: TFS and SQL
    Node B: TFS and SQL
    I could then NLB the TFS front ends and have SQL in an HA AlwaysOn availability group. Is this possible? I would use the opportunity to upgrade to SQL 2012/2014 and TFS 2013.
    Most articles I've read talk about separating SQL out onto its own hardware - this would require a minimum of 2 additional servers to run SQL and most likely a 3rd to have an additional TFS front end.
    Other options would be to virtualise the above (perhaps in the future) or to use TFS online - although we have customised our TFS quite a bit, so online may not be an option for us.
    Thanks in advance.

    Hi Ceefla, 
    Thanks for your post. 
    According to the information in this
    document, you need to install SQL Server 2012 for the TFS server if you want to use the SQL Server AlwaysOn Availability Groups feature.
    What does "have TFS on the same servers" mean? Installing TFS and SQL Server on the same server machine? We suggest you install TFS and SQL Server on different server machines.

  • Oracle Version - High Availability Installation

    Hi,
    We are shortly to install SAP on Windows 2008/Oracle.
    The DEV and TEST systems are to be installed as standalone centralised installations.
    The LIVE system is to be installed as a high availability system, clustered over two hosts.
    I have questions surrounding the Oracle Editions:
    Looking at the differences between the two Oracle editions with respect to our requirements, Enterprise supports failover, whereas Standard does not.
    So,
    Does an SAP high availability installation require Oracle Enterprise Edition to be installed or can it be Oracle Standard Edition?
    Does SAP's High Availability installation functionality handle the complete failover, regardless of Oracle Version?
    Thanks in advance for your assistance.
    Chris

    > Does an SAP high availability installation require Oracle Enterprise Edition to be installed or can it be Oracle Standard Edition?
    SAP systems (including test and QA) always require Oracle Enterprise Edition. You may technically get it to work with Standard Edition, but it's not supported:
    Note 105047 - Support for Oracle functions in the SAP environment
    <...>
          56. Standard Edition
        * SAP products always require the Oracle Enterprise edition, using Oracle Standard edition is not permitted.
    <...>
    Markus

  • Hardware requirements to setup high available SOA system on production env

    Hi All,
    Can anyone tell me the recommended hardware requirements to set up a highly available SOA system on a production machine?
    Thanks
    Jahangir

    We have 2 managed servers set up in 2 different VMs (SuSE Linux).
    The minimum recommended hardware and system configuration for each VM on which the Oracle SOA Suite, Oracle Service Bus, and Oracle BAM
    components are installed is:
    •     4 CPU cores, each with a clock speed of 1.5 GHz or greater
    •     8 GB RAM (physical memory)
    •     Disk space: 50 GB SAN disk space on each VM
    Please note we have a separate domain for each of the three components. Typically the JVM size is 1 GB for the admin server and 2 GB for the managed servers; a sketch of how these are usually set is below. With this configuration, memory usage is always around 90%. If you want to reduce the memory usage percentage, you may have to increase the RAM size.
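    For reference, heap sizes like these are usually set per server in a domain-wide script. A minimal sketch, assuming a recent WebLogic release where startWebLogic.sh sources setUserOverrides.sh (the file name and mechanism differ in older releases):

        # $DOMAIN_HOME/bin/setUserOverrides.sh (hypothetical)
        if [ "${SERVER_NAME}" = "AdminServer" ]; then
          USER_MEM_ARGS="-Xms1g -Xmx1g"   # 1 GB heap for the admin server
        else
          USER_MEM_ARGS="-Xms2g -Xmx2g"   # 2 GB heap for each managed server
        fi
        export USER_MEM_ARGS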

  • Outlook Anywhere, high availability cas issue.

    We are beginning a deployment of a 3 node exchange 2013 dag/cas group and have hit a snag with outlook anywhere.
    the configuration:
    SIDHEX01A - DAG member and CAS
    SIDHEX02A - DAG member and CAS
    SIDHEX03A - DAG member and CAS
    We have one live database running on SIDHEX02A with passive copies on 1 and 3. We have 5 live users functioning as a test group.
    DNS: we have internal DNS set to resolve mail.domain.net as all three servers. We have public DNS set to resolve mail.domain.net to an external ip on our firewall. From there we forward ports 25, 80, and 443 to server SIDHEX02A.
    All 3 servers have Certs for mail.domain.net applied to iis and smtp.
    We have set the external ( and internal names) of all the exchange services to mail.domain.net including owa, ecp, outlook anywhere, oab, activesync, and ews.
    Our next planned step was to replace the direct port forward with a load balancer, completing the high availability CAS, when we hit the following snag:
    The DAG works perfectly.
    OWA, ActiveSync, ECP, OAB, and EWS all function fine. Incoming and outgoing mail works perfectly. We simulate CAS load balancing by changing the port forwarding to direct to a different CAS server.
    We are however experiencing some weird behavior with Outlook Anywhere.
    We have a large number of users that work completely off-prem and will be using Outlook without ever connecting to the internal network at all. When we try to configure Outlook to connect to the CAS, we use the following:
    server: mail.domain.net
    exchange proxy: mail.domain.net
    This doesn't work; the Outlook client never finds the server.
    The Exchange Remote Connectivity Analyzer reports that it is unable to ping the proxy server (it is pingable) after correctly checking certs and completing all the other communications.
    Attempting to ping RPC proxy mail.domain.net.
      RPC Proxy can't be pinged.
     Additional Details
    A Web exception occurred because an HTTP 404 - NotFound response was received from Unknown.
    Headers received:
    request-id: e9462008-75e2-4990-9d0f-a6c5514f5bce
    X-CasErrorCode: EndpointNotFound
    Persistent-Auth: true
    X-FEServer: SIDHEX02A
    Content-Length: 0
    Cache-Control: private
    Date: Mon, 30 Dec 2013 02:46:34 GMT
    Server: Microsoft-IIS/8.0
    X-AspNet-Version: 4.0.30319
    X-Powered-By: ASP.NET
    Elapsed Time: 21308 ms. 
    Now if we tell the Outlook client to use SIDHEX02A as the server name, Outlook will resolve the mailbox and everything appears great, except for two issues.
    First, with a load balancer implemented we wouldn't know what the target server name was.
    Secondly, we see some sort of link in Outlook to the internal server name. On a CAS failover, Outlook functions perfectly until it is closed and reopened, with around a 60-second "trying to connect" before finally resolving the mailbox. This appears to
    be permanent: if we start an Outlook profile on SIDHEX03A, for example, it will have 60-second delays on 01 or 02, but never on 03. It's as if the client is looking for its "home server" before considering alternatives.
    It's possible that the load balancer would resolve some of this, and we are happy to do that; we have also looked at ARR, since this blog post seems to imply it's required:
    http://blogs.technet.com/b/exchange/archive/2013/07/19/reverse-proxy-for-exchange-server-2013-using-iis-arr-part-1.aspx
    but we don't want to add complexity over a problem until we understand why it's needed.
    Also of note: we don't have a cert for autodiscover.domain.net, nor is it publicly resolvable.
    We don't have any 2010 servers; this is a clean 2013 environment.
    Thank you for taking the time to review this issue, I will provide any follow up info I can.

    Hi,
    Firstly, I'd like to explain that the host name autodiscover.domain.com should be in the certificate to ensure the Autodiscover service can work externally.
    I'd also like to confirm whether the test users are used internally, and what happens when you directly access the following URL:
    https://autodiscover.domain.com/autodiscover/autodiscover.xml
    Additionally, according to your description, you forward ports 25, 80, and 443 to server SIDHEX02A.
    To understand more about the issue, I'd like to confirm whether you set all SMTP, HTTP and HTTPS requests to forward to the server SIDHEX02A, and how you achieve that. Maybe this is the key point of the issue.
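    As a rough external check (host names follow the examples in this thread), something like the following from an off-prem machine shows whether the endpoints answer at all. An HTTP 401 challenge is the usual healthy response to an unauthenticated request, while a 404 matches the EndpointNotFound error in your test output:

        # Probe Autodiscover and the RPC proxy behind Outlook Anywhere
        curl -vk https://autodiscover.domain.com/autodiscover/autodiscover.xml
        curl -vk https://mail.domain.net/rpc/rpcproxy.dll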
    Thanks,
    Angela Shi
    TechNet Community Support

  • IPS 4240 High Availability?

    Hello there,
    Does 4240 work in HA mode?
    Or do I have to look at 4255 if I need them to work in HA mode?
    Kindly help me with this info..thanks in advance.
    Regards,
    Ram

    Just to add a little bit to Bob's response.  It is possible to get HA, but as mentioned above, it's not the HA you would expect from a firewall; it requires significant network planning and is pretty technical in nature.
    The best documentation I have been able to find regarding HA designs is in Chapter 21 - "Deploying Cisco IPS for High Availability and High Performance"  of the CCNP Security IPS 642-627 Official Cert Guide, ISBN: 9780132372107.  It gets pretty detailed and explains a lot of different methods. 
    I was also able to find some information on this site, but it's at a higher level, and doesn't provide as many options.
    https://www.networkworld.com/community/node/18384
    I've had to work HA into some of our environments, and I'm here to tell ya: plan ahead, way ahead, and test several methods to find the best one.  We ended up using a method that I couldn't find mentioned anywhere. 

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached at the server side). So if you really have decided to use dedicated hardware for storage (maybe you have a reason I don't know...), and if you're fine with your storage being a single
    point of failure (OK, maybe your RTOs and RPOs allow it), then at least use an SMB share. SMB at least caches I/O on both the client and server sides, and you can also use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache
    for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching (updated):
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
    Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there is only an active-passive I/O model (no real benefit from MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario
    was usable when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present, so the scenario is useless: just export your existing shared storage without
    any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following table lists the storage types that you can use to provide shared storage for a guest cluster:
    • Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
    • Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
    • iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
    role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
    the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
    There are other guys doing this, say DataCore (playing more in Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • SharePoint Foundation 2013 - Can we use the foundation for intranet portal with high availability ( medium farm)

    Today I had a requirement where we have to use SharePoint Foundation 2013 (the free version) to build an intranet portal (basic announcements, calendar, department sites, document management - only check-in/check-out and versioning).
    Please help me regarding the licensing and size limitations. (I know the feature comparison of Standard/Enterprise; I just want to know about the installation process and licensing.)
    6 servers - 2 app / 2 web / 2 DB cluster (so in total: 6 Windows OS licenses, 2 SQL Server licenses, and I guess no SharePoint licenses)

    Thanks Trevor,
    Does the load balancing service also come with the free license? In that case I can use SharePoint Foundation 2013 for building a simple intranet & DMS (with limited functionality), and for workflow and content management we would have to write code.
    Windows Network Load Balancing (the NLB feature) is included as part of Windows Server and would offer high availability for traffic bound to the SharePoint servers. WNLB can only associate with up to 4 servers.
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • OIM 11g High Availability Deployment

    Hi Experts,
    I'm deploying OIM 11g in a high availability schema, following the Oracle docs: http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF. I have successfully installed and configured OIM & SOA in a WebLogic domain on 'OIMHOST1'. To propagate the configuration from 'OIMHOST1' to 'OIMHOST2', I packed the domain on 'OIMHOST1' (using pack.sh) and unpacked it (using unpack.sh) on 'OIMHOST2'; then I updated the NodeManager by executing setNMProps.sh and finally started the NodeManager. To test that everything is fine, and following the documentation, I'm trying to perform the following steps, but I'm not succeeding.
    I MUST SAY THAT I'M RUNNING ON A SINGLE STANDARD EDITION DB INSTANCE AND NOT RAC AS MENTIONED IN THE ORACLE DOCS. PLEASE CLARIFY IF RAC IS REQUIRED; FOR NOW I'M IN A DEVELOPMENT ENVIRONMENT, SO I THINK RAC IS NOT REQUIRED. PLEASE CLARIFY.
    8.9.3.8.3 Start the WLS_SOA2 and WLS_OIM2 Managed Servers on OIMHOST2
    Follow these steps to start the WLS_SOA2 and WLS_OIM2 managed servers on OIMHOST2:
    Stop the WebLogic Administration Server on OIMHOST2. Use the WebLogic Administration Console to stop the Administration Server.
    Start the WebLogic Administration Server on OIMHOST2 using the startWebLogic.sh script under the $DOMAIN_HOME/bin directory. For example:
    /u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>&1 &
    Validate that the WebLogic Administration Server started up successfully by bringing up the WebLogic Administration Console.
    Here it's not possible to start the AdminServer on OIMHOST2. First of all, it looks like the boot.properties file under WLS_OIM_DOMAIN_HOME/servers/AdminServer/security is not valid: the first time I tried to execute the startWebLogic.sh script, it asked for a username/password. I updated boot.properties (vi boot.properties) and manually set a clear-text username and password; this time the startWebLogic.sh script passed this stage, but it fails with:
    <Error> <util.install.help.BuildMasterHelpSet> <BEA-000000> <IOException ioe java.io.IOException: No such file or directory>
    <Error> <oracle.adf.share.config.ADFMDSConfig> <BEA-000000> <MDSConfigurationException encountered in parseADFConfigurationMDS-01330: unable to load MDS configuration document
    MDS-01329: unable to load element "persistence-config"
    MDS-01370: MetadataStore configuration for metadata-store-usage "writeable" is invalid.
    MDS-00503: The metadata path "/u01/app/oracle/product/Middleware/user_projects/domains/IDMDomain/sysman/mds" does not contain any valid directories.
    I have verified that this "mds" directory does not exist on OIMHOST2, as reported by the IOException, but it does exist on OIMHOST1. From here it's not possible for me to follow Oracle's documentation. I tested this by starting the AdminServer on OIMHOST1 and starting the WLS_SOA2 and WLS_OIM2 managed servers from the OIMHOST1 AdminServer console. I have tested 2 ways:
    1. All managed servers on OIMHOST1 are shut down; in this case, the managed servers on OIMHOST2 work as expected.
    2. All managed servers on OIMHOST1 are RUNNING; in this case, first I started the SOA2 managed server, and after that I fired up the OIM2 managed server. When it finishes the boot process, the following message appears in the server's output:
    <Warning> <org.quartz.impl.jdbcjobstore.JobStoreCMT> <BEA-000000> <This scheduler instance (servername.domainname1304128390936) is still active but was recovered by another instance in the cluster. This may cause inconsistent behavior.>
    Start the WLS_SOA2 managed server using the WebLogic Administration Console.
    Start the WLS_OIM2 managed server using the WebLogic Administration Console. The WLS_OIM2 managed server must be started after the WLS_SOA2 managed server is started.
    8.9.3.9 Validate the Oracle Identity Manager Instance on OIMHOST2
    Validate the Oracle Identity Manager Server instance on OIMHOST2 by bringing up the Oracle Identity Manager Console using a web browser.
    The URL for the Oracle Identity Manager Console is:
    http://oimvhn2.mycompany.com:14000/oim
    Log in using the xelsysadm password.
    Your help is highly appreciated
    Regards
    Juan

    Hi Vaasu,
    I have succeeded in deploying OIM in HA; right now my customer and I are working on the installation of the web tier. I now have a better understanding of HA concepts and the way WebLogic works (really nice, but a little tricky).
    All the magic about HA is configuring the network interfaces properly on each Linux box (in our case). First of all, you need to create 2 new floating IPs on each Linux box (google "how to create a virtual IP in Linux" if you don't know how): clone and modify your 'eth0' network script to create the virtual IPs, as in the sketch below.
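    A minimal sketch of one floating IP, with hypothetical addresses (modern distributions use the ip command; older RHEL-style systems persist the alias in a cloned interface script, as mentioned above):

        # Add a hypothetical floating IP as alias eth0:1 (runtime only)
        ip addr add 192.168.10.21/24 dev eth0 label eth0:1
        # Persistent variant: clone /etc/sysconfig/network-scripts/ifcfg-eth0
        # to ifcfg-eth0:1 and set DEVICE=eth0:1, IPADDR=192.168.10.21,
        # NETMASK=255.255.255.0, ONBOOT=yes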
    Follow the procedure in the HA guide: http://download.oracle.com/docs/cd/E14571_01/core.1111/e10106/imha.htm#CDEFECJF
    create DB schemas with RCU
    install weblogic
    install SOA
    patch SOA
    install IAM
    ---if you are working on a virtual machine, it is a good idea to take a snapshot here---
    Create and configure the WebLogic domain (pay special attention when configuring the cluster); see step 13 of 8.9.3.2 Creating and Configuring the WebLogic Domain for OIM and SOA on OIMHOST1. Here you need to configure:
    For the oim_server1 entry, change the entry to the following values:
    Name: WLS_OIM1
    Listen Address: the IP that is configured on eth0:1 of Linux box1
    Listen Port: 14000
    For the soa_server1 entry, change the entry to the following values:
    Name: WLS_SOA1
    Listen Address: the IP configured on eth0:2 of Linux box1
    Listen Port: 8001
    For the second OIM Server, click Add and supply the following information:
    Name: WLS_OIM2
    Listen Address: the IP configured on eth0:1 of Linux box2
    Listen Port: 14000
    For the second SOA Server, click Add and supply the following information:
    Name: WLS_SOA2
    Listen Address: the IP configured on eth0:2 of Linux box2
    Listen Port: 8001
    Click Next.
    On step 16, ensure you are using the UNIX tab to configure the machines; also ensure that for machine1 you use the IP configured on the eth0 interface of Linux box1, and the same for machine2.
    Please confirm you have performed 8.9.3.3.2 Update Node Manager on OIMHOST1.
    If everything is OK, you must be able to start the AdminServer as described in the guide.
    Configure OIM: 8.9.3.4.2 Running the Oracle Identity Management Configuration Wizard. In my case I don't need LDAP sync, so I skipped this section. If you configure OIM properly, then you must perform 8.9.3.5 Post-Configuration Steps for the Managed Servers.
    Restart the AdminServer, then from the WebLogic console start OIM and SOA. If the node manager is properly configured, SOA and OIM must run properly. Update the deployment mode and coherence settings as described in the guide and verify that OIM runs perfectly on Linux box1.
    Propagate OIM from Linux box1 to Linux box2 as described in the guide, using pack and unpack (you MUST use the same filesystem directory structure on both Linux boxes; see the sketch after these steps).
    Update and start NodeManager as described in the guide
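    A rough sketch of the pack/unpack propagation (script locations and the domain path follow the 11g layout mentioned earlier in this thread; adjust to your install):

        # On Linux box1: package the domain as a managed-server template
        $MW_HOME/oracle_common/common/bin/pack.sh -managed=true \
          -domain=/u01/app/oracle/product/Middleware/user_projects/domains/IDMDomain \
          -template=/tmp/IDMDomain.jar -template_name="IDM Domain"
        # Copy /tmp/IDMDomain.jar to Linux box2, then unpack into the SAME path
        $MW_HOME/oracle_common/common/bin/unpack.sh \
          -domain=/u01/app/oracle/product/Middleware/user_projects/domains/IDMDomain \
          -template=/tmp/IDMDomain.jar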
    VERY IMPORTANT OBSERVATION
    The guide says:
    8.9.3.8.3 Start the WLS_SOA2 and WLS_OIM2 Managed Servers on OIMHOST2
    Follow these steps to start the WLS_SOA2 and WLS_OIM2 managed servers on OIMHOST2:
    Stop the WebLogic Administration Server on OIMHOST2. Use the WebLogic Administration Console to stop the Administration Server.
    JUAN'S OBSERVATION:
    IT IS NOT POSSIBLE TO START OR STOP THE ADMINSERVER ON HOST2, SINCE THE ADMIN SERVER WAS CONFIGURED TO LISTEN ON THE IP ADDRESS OF THE eth0 INTERFACE ON HOST1, SO IT'S NOT POSSIBLE TO RUN IT ON HOST2. I THINK AN ADDITIONAL PROCEDURE SHOULD BE FOLLOWED TO CONFIGURE THE ADMINSERVER FOR HA IN AN ACTIVE-PASSIVE MODE.
    Start the WebLogic Administration Server on OIMHOST2 using the startWebLogic.sh script under the $DOMAIN_HOME/bin directory. For example:
    /u01/app/oracle/admin/OIM/bin/startWebLogic.sh > /tmp/admin.out 2>&1 & -----NOT APPLICABLE
    Validate that the WebLogic Administration Server started up successfully by bringing up the WebLogic Administration Console. -----NOT APPLICABLE
    Start the WLS_SOA2 managed server using the WebLogic Administration Console. ----START SOA2 FROM THE CONSOLE RUNNING ON HOST1, IT DOESN'T MATTER
    Start the WLS_OIM2 managed server using the WebLogic Administration Console. The WLS_OIM2 managed server must be started after the WLS_SOA2 managed server is started. ------ START OIM2 FROM THE CONSOLE RUNNING ON HOST1
    HERE YOU MUST BE ABLE TO LOG IN TO THE OIM2 SERVER AS DESCRIBED IN THE GUIDE; YOU DON'T NEED TO EXECUTE THE config.sh SCRIPT. THIS SHOULD WORK AS DESCRIBED.
    Server migration should work straightforwardly if you have configured the floating IPs as described. I have not configured persistence yet, since my customer does not have the skills to set up shared storage.
    I hope this helps, and feel free to comment or complement.
    By the way, do you know how to set up a valid SSL certificate on Windows Server 2003? I need it to test an Exchange 2007 integration I'm trying to do.
    Regards
    Juan

  • High Availability of BPEL System

    We have a high availability architecture for the BPEL system in our production environment.
    The BPEL servers are clustered in the middle tier of the architecture and RAC is used in the database tier of the architecture.
    We have 5 BPEL processes which invoke one another. For example:
    BPELProcess1 --> BPELProcess2 --> BPELProcess3, BPELProcess4 &
    BPELProcess4 --> BPELProcess5
    Now, when all the above BPEL processes are deployed on both nodes of the BPEL server, how do we handle the endpoint URLs of these BPEL servers?
    Should we hardcode the endpoint URL in the invoking BPEL process, or should we replace the IP addresses of the two BPEL server nodes with the IP address of the load balancer?
    If we replace the IP address of the BPEL server with the IP address of the load balancer, it will require us to modify, redeploy and retest all the BPEL processes again.
    Please advise.
    Thanks

    The BPEL servers are configured in an active-active topology, and RAC is used in the database tier of the architecture.
    The BPEL servers are not clustered; a load balancer is used in front of the two BPEL server nodes.
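    In that setup, partner links should reference each process's WSDL through the load balancer's virtual address rather than either node's IP, so a node failure requires no redeployment. A hedged sketch of verifying this (the hostname and URL pattern are illustrative, not taken from this thread):

        # Hypothetical virtual hostname fronting both BPEL nodes
        LB=http://bpel-lb.example.com:7777
        # Fetch a process WSDL through the load balancer; if this answers from
        # either node, invoking processes can safely use the LB address
        curl -s "$LB/orabpel/default/BPELProcess2/BPELProcess2?wsdl" | head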

  • Exchange 2010 to Exchange 2013 Migration and Architect a resilient and high availability exchange setup

    Hi,
    I currently have a single Exchange 2010 server that holds all the roles, supporting about 500 users. I plan to upgrade to 2013 and move to a four-server HA Exchange setup (a CAS array with 2 CAS servers and one DAG with 2 mailbox servers). My
    goal is to plan out the transition in steps with no downtime. Email is most critical for my company.
    Exchange 2010 is running SP3 on a Windows Server 2010 machine, with a separate server for archiving. In the new setup, rather than having a separate server for archiving, I am just going to put that on a separate partition.
    Here is what I have planned so far.
    1. Build out four servers: 2 CAS and 2 mailbox servers. The mailbox servers have 4 partitions each: one for the OS, a second for the DB, a third for logs, and a fourth for archives.
    2. Prepare AD for Exchange 2013.
    3. Install the Exchange roles: CAS on two servers and mailbox on the other two. Add a DAG. Someone suggested to me to use an odd number of members, so 3 or 5. Is that a requirement?
    4. I am using a third-party load balancer for the CAS array instead of NLB, so I will be setting that up.
    5. Do the post-install work to ready the new CAS. While doing this, can I use the same parameters as assigned on Exchange 2010, e.g. can I use the same webmail URL for Outlook Anywhere, the OAB, etc.?
    6. Once this is done, I plan to move a few mailboxes to the new mailbox servers or DAG as a test.
    7. Test Outlook setups on the new servers, plus inbound and outbound email tests.
    Once this is done, I can migrate over and point all my MX records to the new servers.
    Please let me know your thoughts and what I am missing. I'd like to solidify a flowchart of all the steps I need to do before I start the migration.
    Thank you for your help in advance.
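    As a sanity check around the MX cutover in the last step, something like the following, run from an external machine, shows what the world currently resolves (the domain is a placeholder):

        # Current MX records and the host they point at
        dig +short MX example.com
        dig +short A mail.example.com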

    Hi,
    okay, you can use 4 virtual servers, but there is no need to deploy dedicated server roles (CAS + MBX). It is better to deploy multi-role Exchange servers, also virtual! You could install 2 multi-role servers and, if the company grows, install another multi-role server,
    and so on. It's much simpler, better and less expensive.
    A CAS array is only an Active Directory object, nothing more. The load balancer controls the sessions and determines which CAS the user will terminate on. You can read more at
    http://blogs.technet.com/b/exchange/archive/2014/03/05/load-balancing-in-exchange-2013.aspx - also, no session affinity is required.
    First, build the complete Exchange 2013 architecture. High availability for your data is a DAG, and for your CAS you use a load balancer.
    On Channel 9 there is a lot of stuff from MEC:
    http://channel9.msdn.com/search?term=exchange+2013
    Migration:
    http://geekswithblogs.net/marcde/archive/2013/08/02/migrating-from-microsoft-exchange-2010-to-exchange-2013.aspx
    Additional information:
    http://exchangeserverpro.com/upgrading-to-exchange-server-2013/
    Hope this helps :-)

  • Hyper-V 2012 High Availability using Windows Server 2012 File Server Storage

    Hi Guys,
    Need your expertise regarding Hyper-V high availability. We set up 2 Hyper-V 2012 hosts in our infrastructure for our domain consolidation project. Unfortunately, we don't have the hardware storage that is said to be a requirement for creating a failover cluster
    of Hyper-V hosts to implement HA. Here's the setup:
    Host1
    HP Proliant L380 G7
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Host2
    Dell PowerEdge 2950
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Storage
    Dell PowerEdge 6800
    Windows Server 2012 Std
    File and Storage Services installed
    I'm able to configure the new Shared Nothing Live Migration feature - I'm able to move VMs back and forth between my hosts without shared storage. But this is a planned, proactive approach. My concern is to have my Hyper-V hosts become highly available in
    the event of a system failure. If host1 dies, the VMs should go/move to host2 and vice versa. In setting this up, I believe I need to enable failover clustering between my Hyper-V hosts, which I already did, but upon validation it says "No disks
    were found on which to perform cluster validation tests." Is it possible to cluster it using just a regular Windows file server? I've read about SMB 3.0 and I've configured it as well; I'm able to save VMs on my file server, but I don't think that my Hyper-V
    hosts are already highly available.
    Any feedback and suggestions or recommendation is highly appreciated. Thanks in advance!

    Your shared storage is a single point of failure in this scenario, so I would not consider the whole setup a production configuration... The setup is also both slow (all I/O travels down the wire to the storage server; running VMs from DAS is far faster)
    and expensive (a third server + an extra Windows license). I would think twice about what you do and either deploy built-in VM replication technology (Hyper-V Replica) plus applications' built-in clustering features that do not require shared storage (SQL Server
    Database Mirroring, for example - BTW, what workload do you run?), or use third-party software to create fault-tolerant shared storage from DAS, or invest in physical shared-storage hardware (an HA one, of course). 
    Hi VR38DETT,
    Thanks for responding. The hosts will carry a domain controller (one on each host), web filtering software (Websense), anti-virus (McAfee ePO), WSUS and an audit server at the moment. Does Hyper-V Replica somehow give "high availability" to the VMs or the Hyper-V
    hosts? Also, is a cluster required in order to implement it? Haven't tried that, but it's worth a try.

  • Oracle EBS R12.1.3 High Availability with RAC 11g on VMWARE

    Dear All,
    Our customer's requirements are high availability and good hardware utilization for an EBS R12.1.3 implementation. As per our strategy, we want to use Oracle VM (I think it doesn't require any licensing), RAC 11gR2 and apps-tier load balancing. We have only two servers to achieve all this. On one node there will be two VMs, one for the db tier and another for the apps tier, and the same on the second server. This way we will have four virtual machines on two physical nodes. For all of these we will use OEL Linux as the operating system.
    Please share your experiences on whether the above deployment model is correct or needs further enhancements.
    Regards.

    > Our customer requirement is of high availability and good utilization of hardware for EBS R12.1.3 implementation. As per our strategy we want to use Oracle VM (I think it doesn't require any licensing).
    Please contact your Oracle Sales representative for license questions.
    > RAC 11gR2 and apps-tier load balancing. We have only two servers to achieve all this. On one node there will be two VMs, one for the db tier and another for the apps tier, and the same on the second server. So this way we will have four virtual machines on two physical nodes. For all these we will use OEL Linux as the operating system.
    > Please share your experiences, if the above deployment model is correct or needs further enhancements.
    Please see these docs:
    Using Oracle VM Templates for Oracle E-Business Suite [ID 975734.1]
    Using Oracle VM with Oracle E-Business Suite Release 11i or Release 12 [ID 465915.1]
    Oracle E-Business Suite High Availability on Oracle VM [ID 986690.1]
    Certified Software on Oracle VM [ID 464754.1]
    Oracle VM General Frequently Asked Questions [ID 464756.1]
    Thanks,
    Hussein

  • Migrating Dimension Server from local installation to High Availability

    Hi Team,
    When Hyperion was installed at the client site (11.1.2.3), there was no requirement for EPMA to be highly available.
    That has now become a requirement, to keep it available in the event that one of the virtual machines is unreachable.
    In the event that a server becomes unavailable, a manual failover procedure is acceptable.
    What steps can be taken to prepare for such an event?
    Possible outcome:
    Configure another dimension server on the second Virtual machine and leave it manually switched off.
    Modify the IIS path to point to a clustered location accessible by both machines.
    In the event of a failover, manually bring up the dimension server on the other machine.
    This is yet to be implemented successfully.
    Current Scenario:
    1 x Windows Machine hosting:
      Foundation Services
      Calculation Manager
      Performance Architect
      Planning
      Reporting and Analysis
      Repository location on a shared mount (Clustered filesystem)
      FDQM
      Essbase
      Analytic Provider Services
      Essbase Integration Services (not used)
      Essbase Studio (not used)
    1 x Linux Machine
      Essbase Server
      Arborpath on a shared mount
    Aim: Add 2 new Virtual Machines ( 1 x Windows App Server + 1 x Linux Essbase Server) for high availability.
    Progress:
      Windows Machine components clustered:
      Foundation Services
      Calculation Manager
      Planning
      Reporting and Analysis
      Essbase
      Analytic Provider Services
      Linux Essbase Machines:
      Configured Active/Passive using OPMN
    Challenge faced:
    The failover scenario can be a manual list of steps to restore functionality.
    Please provide support on introducing failover steps for the dimension server.

    Remote Desktop/Preferences/Task Server
    Add the FQDN or IP address of your server in this pane on your book (basically you're pointing your book at your installed software on the other machine). Set up your server with the installs you want it to make, and administer it via your book (or any machine with another unlimited ARD license). If you want report generation, you would schedule the clients to send the info to the server. A task server can only do 2 things, install packages or change client settings, that's it.

  • SQL Server Analysis Services (SSAS) 2012 High Availability Solution in Azure VM

    I have been testing an AlwaysOn high availability failover solution with SQL Server Enterprise on an Azure VM, and this works pretty well as a failover for SQL Server databases, but I also need a high availability solution for SQL Server
    Analysis Services, and so far I haven't found a way to do this.  I can load balance it between two machines, but this doesn't work as a failover, and because of the restriction of not being able to use shared storage in a failover cluster in Azure
    VMs, I can't set it up as a cluster, which is required for AlwaysOn in Analysis Services. 
    Has anyone else found a solution for AlwaysOn high availability for SQL Analysis Services in an Azure VM?  As my databases are read-only, I would be satisfied even with a solution that just syncs the OLAP databases and switches
    the data connection to the same server as the SQL databases.
    Thanks!
    Bill

    Bill,
    So, what you need is a model like SQL Server failover cluster instances (before SQL Server 2012).
    In SQL Server 2012, AlwaysOn replaces SQL Server failover clustering, and it has been separated into
    AlwaysOn Failover Cluster Instances (SQL Server) and
    AlwaysOn Availability Groups (SQL Server).
    Since your requirement is not at the database level, I think the best option is to use AlwaysOn Failover Cluster Instances (SQL Server).
    As part of the SQL Server AlwaysOn offering, AlwaysOn Failover Cluster Instances leverages Windows Server Failover Clustering (WSFC) functionality to provide local high availability through redundancy at the server-instance level—a
    failover cluster instance (FCI). An FCI is a single instance of SQL Server that is installed across Windows Server Failover Clustering (WSFC) nodes and, possibly, across multiple subnets. On the network, an FCI appears to be an instance of SQL
    Server running on a single computer, but the FCI provides failover from one WSFC node to another if the current node becomes unavailable.
    It is similar to SQL Server failover cluster in SQL 2008 R2 and before.
    Please refer to these references:
    Failover Clustering in Analysis Services
    Installing a SQL Server 2008 R2 Failover Cluster
    Iric Wen
    TechNet Community Support
